Day 18: RAM Run

Megathread guidelines

  • Keep top-level comments as solutions only; if you want to say something other than a solution, put it in a new post. (Replies to comments can be whatever.)
  • You can send code in code blocks by using three backticks before and after the code, or use something such as https://topaz.github.io/paste/ if you prefer sending it through a URL

FAQ

  • sjmulder@lemmy.sdf.org · 2 days ago

    Awesome! I understood the idea behind the binary search but thought it wasn’t a good fit for the flood fill. As opposed to something like A*, flood fill gives you reachability and cost for every cell (at a cost), but that’s no use when you do repeated searches that are only meant to find a single path. So I was very happy with your suggestion; it plays to each approach’s strengths.

    “Virtually instant”, btw, means time(1) reports 0.00. I like it when things are fast, but I also prefer simpler approaches (that is: loops and arrays) over the really optimized fast stuff. People do really amazing things, but the really clever algorithms lean on optimized generic data structures that C lacks. It’s fun though to see how far you can drive loops and arrays! Perhaps next year I’ll pick a compiled language with a rich data structure library and really focus on effectively applying good algorithms and appropriate data structures.

    Btw, how do you measure performance? I see a lot of people include timing in their programs but I can’t be bothered. Some people also exclude parsing, which wouldn’t work for me because I try to process the input immediately, if possible.

    • Acters@lemmy.world · edited · 2 days ago

      On the topic of flood fill and other pathfinding algorithms: I do think your method is quite fast. However, I saw on Reddit that someone viewed Part 2 as more like a tree phenomenon called “crown shyness”, where two trees limit their growth to avoid touching each other.

      So the idea behind the “crown shyness” approach: when you add a block, you find which corner (top right or bottom left) it is connected to (i.e. in union with), until one block connects both corners. Instead of pathfinding, you are connecting walls to one side. This is the “union-find” algorithm, and the optimization is that when a block drops, you only calculate what it connects with. You can find visualizations of it, which make it easier to see. This method is by far more performant: with all the blocks placed, you can be sure they are all in one union, but as you remove blocks, two unions eventually appear! The block whose removal splits them is the solution.

      Your flood fill mimics this closely, but instead of a union of walls, it checks whether there is a union between the start and end nodes, i.e. the top left and bottom right nodes. When the wall piece that blocks the path is placed, it creates two separate unions containing the start and end nodes.
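      A rough sketch of that union-find approach in Python (the 7x7 grid size, the coordinate format, and all the names here are my own assumptions, not anything from the thread; since walls touch diagonally, blocks use 8-connectivity):

```python
class DSU:
    """Minimal disjoint-set (union-find) with path halving."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

SIZE = 7                        # e.g. the 7x7 example grid
TOP_RIGHT = SIZE * SIZE         # virtual node for the top/right border
BOTTOM_LEFT = SIZE * SIZE + 1   # virtual node for the bottom/left border

def first_blocking_byte(blocks):
    """Return the first (x, y) block that walls off start from end, else None."""
    dsu = DSU(SIZE * SIZE + 2)
    placed = set()
    for x, y in blocks:
        placed.add((x, y))
        idx = y * SIZE + x
        # Walls touch diagonally too, so union with all 8 neighbors.
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                n = (x + dx, y + dy)
                if n != (x, y) and n in placed:
                    dsu.union(idx, n[1] * SIZE + n[0])
        if y == 0 or x == SIZE - 1:
            dsu.union(idx, TOP_RIGHT)
        if x == 0 or y == SIZE - 1:
            dsu.union(idx, BOTTOM_LEFT)
        # Path is cut exactly when the two borders join into one union.
        if dsu.find(TOP_RIGHT) == dsu.find(BOTTOM_LEFT):
            return (x, y)
    return None
```

      For instance, dropping blocks at (1, 0) and then (0, 1) seals off the top-left corner, so the second block would be reported.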

      • sjmulder@lemmy.sdf.org · 2 days ago

        I think I saw the same! At first I thought it required pathfinding to see which nodes are connected to the wall, but then someone pointed at disjoint sets, and just a glance at Wikipedia made it click right away. What an ingeniously simple but useful data structure! Maybe I’ll reimplement my solution with that, mostly as an exercise in disjoint sets and in finding a convenient representation for them in C.

        • Acters@lemmy.world · edited · 2 days ago

          That would be cool af to see in C, let me know if you do. In Python, we can build the two sets and use the convenient call set( [iterable] ).intersection( [iterable] ) to see whether the two sets touch/intersect, since the block that connects the two sets would be in both.

          The way I would build the two sets is to start at the final state with all blocks placed and union-find all the blocks. When a block appears in both sets, we stop that union and proceed with the other unions until we have found all the blocks that appear in both sets. Then we iteratively find the first block that appears in both. In Python the intersection call returns a set, so you can stack the calls, like so: set( [top right union set] ).intersection( [bottom left union set] ).intersection( [one-item list with the current block we are checking] ). Technically you can save the intersection of the first two sets to save a little time, because it would not change.
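          A toy illustration of that stacked intersection call (the coordinates here are made up purely for demonstration, not from any real puzzle input):

```python
# Made-up example sets of block coordinates, one per border union.
top_right_union = {(6, 0), (5, 1), (6, 1), (4, 2)}    # linked to top/right border
bottom_left_union = {(0, 5), (1, 4), (6, 1), (2, 3)}  # linked to bottom/left border

# Blocks appearing in both unions are candidates for the blocking byte.
candidates = set(top_right_union).intersection(bottom_left_union)

# intersection() returns a set, so the calls can be stacked against the
# current block being checked; saving `candidates` avoids recomputing it.
current_block = (6, 1)
hit = candidates.intersection([current_block])
```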

          I didn’t think of this until recently, but I also think it is such a simple and elegant solution. Live and learn! 😄

          hope you are having a good holiday season!

    • Acters@lemmy.world · edited · 2 days ago

      Ah, I exclude loading and reading the file. But since you are pasting the input into the terminal, that is alright.

      My main gripe is that I want to look at the performance of the algorithm/functions rather than the performance of disk access and reads, or the startup/teardown overhead. Python is notorious for its startup overhead, loading the code before execution begins. Why should I measure the language’s performance so harshly? I’d rather look at how my code performs. On Windows, the Python overhead adds 30–40 ms, while on Linux it is faster, a consistent 20 ms, and that is without importing heavy (or many) libraries. If startup is a concern, a compiled, non-interpreted language is a better option (along with its other benefits). That is my reasoning for only measuring my algorithm. I do include parsing the input, as that is part of the challenge, though I see reasons not to: when you are measuring how performant code is, you want to remove as many variables as possible, and if you reuse the code elsewhere, the input to that function may already be parsed. Still, for AoC I measure my parsing too, because I want to think about which ways of parsing are faster and which are bad.
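      The measurement itself can be done with the standard library; here is a minimal sketch that times only the solve step, excluding interpreter startup and I/O (solve() and the input data are placeholders I made up):

```python
import time

def solve(nums):
    # Stand-in for the actual puzzle logic.
    return sum(nums)

nums = list(range(1000))  # stand-in for already-parsed input

# perf_counter() is monotonic and high-resolution, so it suits
# timing a single in-process code section.
start = time.perf_counter()
result = solve(nums)
elapsed_ms = (time.perf_counter() - start) * 1000

print(f"result={result}, solve took {elapsed_ms:.2f} ms")
```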

      For AoC, I find language overhead is not part of the challenge; we should learn new languages when we want to, or use what is comfortable. However, for languages like Uiua, with a lot of specialty built-ins, measuring performance is just not worthwhile, as the main code ends up being a simple “function call”.

      I am sure there is a Python package/module with a fast pathfinder, too. I mostly just want to challenge myself and learn. However, I am finding I need to start learning Rust, because my Python skills are starting to plateau.