Day 18: RAM Run

Megathread guidelines

  • Keep top-level comments to solutions only; if you want to say something other than a solution, put it in a new post. (Replies to comments can be anything.)
  • You can share code in code blocks by using three backticks, the code, and then three backticks, or use something such as https://topaz.github.io/paste/ if you prefer sharing it through a URL.

FAQ

  • Acters@lemmy.world · 3 days ago

    Part 2 can be faster if you iteratively remove blocks until there is a path. It is faster to fail to find a path: the flood fill does not need to fill as many spots, because the map is still filled up with more blocks! This drops the Part 2 solve to a few milliseconds (sketch below). Others have taken a binary search approach, which is faster still.
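
    A minimal sketch of what that loop could look like in C (not my actual code, which is Python; assumes the block list was parsed elsewhere):

    ```
    #include <stdio.h>
    #include <string.h>

    #define GZ 71	/* the puzzle's 71x71 grid */

    static int corrupt[GZ][GZ], seen[GZ][GZ];
    static int bx[3600], by[3600], nb;	/* block list, parsed elsewhere */

    /* DFS reachability to the exit; failing early is what makes this cheap */
    static int
    reach(int x, int y)
    {
    	if (x < 0 || x >= GZ || y < 0 || y >= GZ ||
    	    corrupt[y][x] || seen[y][x])
    		return 0;
    	if (x == GZ-1 && y == GZ-1)
    		return 1;
    	seen[y][x] = 1;
    	return reach(x+1, y) || reach(x, y+1) ||
    	       reach(x-1, y) || reach(x, y-1);
    }

    /* place every block, then un-place from the end until a path appears */
    static void
    part2(void)
    {
    	int i;

    	for (i = 0; i < nb; i++)
    		corrupt[by[i]][bx[i]] = 1;

    	for (i = nb-1; i >= 0; i--) {
    		corrupt[by[i]][bx[i]] = 0;
    		memset(seen, 0, sizeof(seen));
    		if (reach(0, 0)) {	/* block i was the one cutting the path */
    			printf("%d,%d\n", bx[i], by[i]);
    			break;
    		}
    	}
    }
    ```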

    • sjmulder@lemmy.sdf.org · 3 days ago

      Thanks, that’s exactly the sort of insight that I was too tired to have at that point 😅

      The other thing I had to change was to make it recursive rather than iterating over the full grid - the latter is fast for large updates, but very wasteful for local updates like removing the points. Virtually instant now!

      Code
      ```
      #include "common.h"

      #define SAMPLE	0			/* 1 = use the sample grid/limits */
      #define PTZ	3600			/* max number of input points */
      #define GZ	(SAMPLE ? 9 : 73)	/* grid size incl. 1-cell border */
      #define P1STEP	(SAMPLE ? 12 : 1024)	/* blocks fallen for part 1 */
      #define CORR	-1			/* corrupted cell (also the border) */

      /* cell values: CORR, 0 (unreached), or 1 + distance from the start */
      static int g[GZ][GZ];

      /*
       * Take the lowest neighbouring distance (if any) plus one; if that
       * improves this cell, update it and recurse into the neighbours.
       * Unlike a full-grid pass, this only touches the affected cells.
       */
      static void
      flood(int x, int y)
      {
      	int lo=INT_MAX;

      	if (x <= 0 || x >= GZ-1 ||
      	    y <= 0 || y >= GZ-1 || g[y][x] == CORR)
      		return;

      	if (g[y-1][x] > 0) lo = MIN(lo, g[y-1][x] +1);
      	if (g[y+1][x] > 0) lo = MIN(lo, g[y+1][x] +1);
      	if (g[y][x-1] > 0) lo = MIN(lo, g[y][x-1] +1);
      	if (g[y][x+1] > 0) lo = MIN(lo, g[y][x+1] +1);

      	if (lo != INT_MAX && (!g[y][x] || g[y][x] > lo)) {
      		g[y][x] = lo;

      		flood(x, y-1);
      		flood(x, y+1);
      		flood(x-1, y);
      		flood(x+1, y);
      	}
      }

      int
      main(int argc, char **argv)
      {
      	static int xs[PTZ], ys[PTZ];
      	static char p2[32];
      	int p1=0, npt=0, i;

      	if (argc > 1)
      		DISCARD(freopen(argv[1], "r", stdin));

      	/* CORR border so flood() needs no bounds checks on real cells */
      	for (i=0; i<GZ; i++)
      		g[0][i] = g[GZ-1][i] =
      		g[i][0] = g[i][GZ-1] = CORR;

      	for (npt=0; npt<PTZ && scanf(" %d,%d", xs+npt, ys+npt)==2; npt++) {
      		assert(xs[npt] >= 0); assert(xs[npt] < GZ-2);
      		assert(ys[npt] >= 0); assert(ys[npt] < GZ-2);
      	}

      	assert(npt < PTZ);

      	/* place all blocks; +1 offsets account for the border */
      	for (i=0; i<npt; i++)
      		g[ys[i]+1][xs[i]+1] = CORR;

      	/* seed the start cell with distance 1, then fill the grid */
      	g[1][1] = 1;
      	flood(2, 1);
      	flood(1, 2);

      	/* remove blocks in reverse; the first removal that makes the
      	   exit reachable is the part 2 answer */
      	for (i=npt-1; i >= P1STEP; i--) {
      		g[ys[i]+1][xs[i]+1] = 0;
      		flood(xs[i]+1, ys[i]+1);

      		if (!p2[0] && g[GZ-2][GZ-2] > 0)
      			snprintf(p2, sizeof(p2), "%d,%d", xs[i],ys[i]);
      	}

      	/* only the first P1STEP blocks remain now: exactly part 1's state */
      	p1 = g[GZ-2][GZ-2]-1;

      	printf("18: %d %s\n", p1, p2);
      	return 0;
      }
      ```
      
      • Acters@lemmy.world · 2 days ago

        Wooo! Instant is so good, I knew you could do it! When I see my Python script getting close to 20 ms, I usually expect my fellow optimized-language peers to be doing it faster. I was pretty surprised to see so many varying solutions that ended up a little slower just because people didn't realize the potential speed of failing to find a path.

        The first part has a guaranteed path! If you think about a binary search: when there is a path, the blocking block is higher up the list, so we ignore the lower blocks in the list, move to the next “midpoint” to test, and just fill and remove blocks as we go to each midpoint. So I took the first part as the lower bound and moved to a midpoint above that.

        At least that is how I saw it when I first looked, though binary search is a little harder to think of than a simple for loop from the end of the list back. Yet I still got it done (rough sketch below)! I even included a dead-end filler that takes 7 ms to show the final path for Part 2; it was not needed, but it was a neat inclusion!
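
        A rough C sketch of that search (untested; it reuses the grid, block list, and reach() check from the sketch further up the thread):

        ```
        static int
        ok(int k)	/* "a path still exists with the first k blocks placed" */
        {
        	int i;

        	memset(corrupt, 0, sizeof(corrupt));
        	for (i = 0; i < k; i++)
        		corrupt[by[i]][bx[i]] = 1;

        	memset(seen, 0, sizeof(seen));
        	return reach(0, 0);
        }

        static void
        part2_bsearch(void)
        {
        	/* Part 1 guarantees a path at 1024 blocks, so the lower bound
        	   starts there. Invariant: ok(lo) is true, ok(hi) is false. */
        	int lo = 1024, hi = nb, mid;

        	while (hi - lo > 1) {
        		mid = (lo + hi) / 2;
        		if (ok(mid))
        			lo = mid;	/* path exists: culprit is higher up the list */
        		else
        			hi = mid;
        	}
        	/* blocks[hi-1] is the first block that cuts off the exit */
        	printf("%d,%d\n", bx[hi-1], by[hi-1]);
        }
        ```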

        • sjmulder@lemmy.sdf.org · 2 days ago

          Awesome! I understood the idea behind the binary search but thought it wasn't a good fit for the flood fill. As opposed to something like A*, it gives you reachability and cost for every cell (at a cost), but that's of no use when you do repeated searches that are only meant to find a single path. So I was very happy with your suggestion; it fits the flood fill's strengths better.

          “Virtually instant”, btw, means measured as 0.00 by time. I like it when things are fast, but I also prefer simpler approaches (that is: loops and arrays) over the really optimized fast stuff. People do really amazing things, but the really clever algorithms lean on optimized generic data structures that C lacks. It's fun, though, to see how far you can push loops and arrays! Perhaps next year I'll pick a compiled language with a rich data structure library and really focus on effectively applying good algorithms and appropriate data structures.

          Btw, how do you measure performance? I see a lot of people include timing in their programs, but I can't be bothered. Some people also exclude parsing - which wouldn't work for me, because I try to process the input immediately if possible.

          • Acters@lemmy.world · 2 days ago

            On the topic of flood fill and other pathfinding algorithms: I do think your method is quite fast. However, I saw someone on Reddit treat Part 2 as the tree phenomenon called “crown shyness”, where two trees limit their growth to avoid touching each other.

            The idea behind the “crown shyness” approach is that when you add a block, you find which corner (top right or bottom left) it is connected to (in union with), until one block connects both corners. So instead of pathfinding, you are connecting walls to one side. This is the union-find algorithm, and the optimization is that when a block drops, you only calculate what it is connected with. You can find visualizations of it, which make it easier to see. This method is by far more performant: with all the blocks placed, you can be sure the blocks are all in one union, but as you remove blocks, eventually two unions appear! That block is the solution.

            Your flood fill mimics this closely, but instead of a union of walls, it finds whether there is a union between the start and end nodes, i.e. the top-left node and the bottom-right node. When the wall that blocks the path is placed, it splits the start and end nodes into two unions.

            • sjmulder@lemmy.sdf.org · 2 days ago

              I think I saw the same! At first I thought it would require pathfinding to see which nodes are connected to the wall, but then someone pointed at disjoint sets, and just a glance at Wikipedia made it click right away. What an ingeniously simple but useful data structure! Maybe I'll reimplement my solution with that - mostly as an exercise in disjoint sets and in finding a convenient representation for them in C.
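
              A plain parent array might already be convenient enough (untested sketch): cells numbered y*GZ + x, plus two virtual nodes for the two walls.

              ```
              #define NNODE	(GZ*GZ + 2)	/* GZ = 71 as in the sketch above */
              #define WALL_A	(GZ*GZ)		/* virtual node: top edge + right edge */
              #define WALL_B	(GZ*GZ + 1)	/* virtual node: left edge + bottom edge */

              static int parent[NNODE];

              static void
              dsu_init(void)
              {
              	int i;

              	for (i = 0; i < NNODE; i++)
              		parent[i] = i;
              }

              static int
              dsu_find(int a)	/* root of a's set, compressing the path as we go */
              {
              	while (parent[a] != a)
              		a = parent[a] = parent[parent[a]];
              	return a;
              }

              static void
              dsu_union(int a, int b)	/* merge the two sets */
              {
              	parent[dsu_find(a)] = dsu_find(b);
              }
              ```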

              • Acters@lemmy.world · 2 days ago

                That would be cool af to see in C; let me know if you do. In Python, we can build the two sets and use the convenient call set( [iterable object/list/set] ).intersection( [iterable object/list/set] ) to see whether the two sets touch/intersect, as the block that connects the two sets would be in both sets/lists.

                The way I would build the two sets is to start at the final state, with all blocks placed, and just union-find all the blocks. When we find a block that appears in both sets, we stop that union and proceed with the other unions until we have found all the blocks that would appear in both sets. Then we iteratively find the first block that appears in both sets. In Python the intersection call returns a set, so you can chain the intersection calls, like so: set( [top-right union set] ).intersection( [bottom-left union set] ).intersection( [one-item list with the current block we are checking] ). Technically you can just save the intersection of the first two sets to save a little time, because they would not change.
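
                In C, the same idea could also run forward instead of backward (an untested sketch building on the DSU and block list above; note it adds blocks until the two walls join, rather than removing them, and that blocks touch diagonally too):

                ```
                static void
                part2_dsu(void)
                {
                	static int blocked[GZ][GZ];
                	int i, dx, dy, x, y;

                	dsu_init();
                	for (i = 0; i < nb; i++) {
                		x = bx[i]; y = by[i];
                		blocked[y][x] = 1;

                		/* blocks on an edge join that wall's virtual node */
                		if (y == 0 || x == GZ-1) dsu_union(y*GZ + x, WALL_A);
                		if (x == 0 || y == GZ-1) dsu_union(y*GZ + x, WALL_B);

                		/* blocks touch 8-directionally: diagonal contact
                		   also seals the path */
                		for (dy = -1; dy <= 1; dy++)
                		for (dx = -1; dx <= 1; dx++) {
                			if ((!dx && !dy) || x+dx < 0 || x+dx >= GZ ||
                			    y+dy < 0 || y+dy >= GZ)
                				continue;
                			if (blocked[y+dy][x+dx])
                				dsu_union(y*GZ + x, (y+dy)*GZ + (x+dx));
                		}

                		/* walls meeting = start and exit are now separated */
                		if (dsu_find(WALL_A) == dsu_find(WALL_B)) {
                			printf("%d,%d\n", x, y);
                			break;
                		}
                	}
                }
                ```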

                I didn’t think of this until recently, but I also think it is such a simple and elegant solution. Live and learn! 😄

                hope you are having a good holiday season!

          • Acters@lemmy.world · 2 days ago

            Ah, I exclude loading and reading the file. But since you are passing it in through the terminal, that is alright.

            My main gripe is that I want to look at the performance of the algorithm/functions rather than the performance of disk access and reads, or the startup/teardown overhead. Python is notorious for startup overhead, loading the code before execution occurs. Why should I measure the performance of the language so harshly? I would rather look at how my code performs. On Windows, the Python overhead adds 30-40 ms, while on Linux it performs faster, with a consistent overhead of 20 ms - and that is without importing heavy or many libraries. If startup is a concern, a precompiled, non-interpreted language is a better option (along with the other benefits). This is my reasoning for measuring only my algorithm. I do include parsing the input, as that is part of the challenge, but I do see reasons not to: when you are looking for performant code, you want to scientifically remove as many variables as possible, and if you reuse some code, the input to that function may already be parsed and you just want raw performance. Still, I measure my parsing because, for AoC, I want to think about which ways of parsing are fast and which are bad.
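
            In a compiled language the same split would be easy to do by hand; e.g. a rough C pattern (read_input() and solve() are hypothetical stand-ins), with the file read left unmeasured and parsing plus algorithm timed:

            ```
            #include <stdio.h>
            #include <time.h>

            static void read_input(void) { /* hypothetical: slurp the file (unmeasured) */ }
            static void solve(void)      { /* hypothetical: parse + algorithm (measured) */ }

            int
            main(void)
            {
            	struct timespec t0, t1;
            	double ms;

            	read_input();

            	clock_gettime(CLOCK_MONOTONIC, &t0);
            	solve();
            	clock_gettime(CLOCK_MONOTONIC, &t1);

            	ms = (t1.tv_sec - t0.tv_sec) * 1e3 +
            	     (t1.tv_nsec - t0.tv_nsec) / 1e6;
            	printf("solve: %.3f ms\n", ms);
            	return 0;
            }
            ```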

            For AoC, I find language overhead is not part of the challenge; we should learn new languages when we want to, or use what is comfortable. However, for languages like Uiua, with a lot of specialty functions, measuring performance is just not worth it, as the main code is just a simple “function call”.

            I am sure there is a Python package/module that includes a fast pathfinder, too; I just want to challenge myself, mostly to learn. However, I am finding I will need to start learning Rust, because my Python skills are starting to plateau.