Just thinking of the poor sods that are going to be working today (and all night).

Oh and more pics since the bingilator is not without its randomness.

  • Kojichan@lemmy.world · 2 months ago

    Thank you so much for that wonderful information! No joke!

    I’ll have some new things to test when I actually start generating. I’m doing the local generation on Linux with a 12GB GPU and 32GB RAM, so I know to expect some slowdowns.

    But I was talking specifically about ComfyUI, the web UI that you open in a browser. I can work with it for a little bit, but once I stay in the window too long (even without generating), it starts flickering between my node graph and a different set of nodes.

    Not sure what that issue is. Can’t even save or load workspaces properly… I’m going to blame the Snap Firefox I’m using… Maybe I’ll try something else that’s not a Snap, or a Flatpak.

    • tal@lemmy.today · 2 months ago (edited)

      it starts flipping frames between the nodes and a different set of nodes.

      Yeah, I don’t know what would cause that. I use it in Firefox.

      Maybe try opening it in Chromium, or in a private window to rule out addons (assuming your Firefox is set up not to run addons in private windows).

      I’m still suspicious of resource consumption, either RAM or VRAM. I don’t see another reason that you’d suddenly smack into problems when running ComfyUI.

      I’m currently running ComfyUI and Firefox and some relatively light other stuff, and I’m at 23GB RAM used (by processes, not disk caching), so I wouldn’t expect that you’d be running into trouble on memory unless you’ve got some other hefty stuff going on. I run it on a 128GB RAM, 128GB paging NVMe machine, so I’ve got headroom, but I don’t think that you’d need more than what you’re running if you’re generating stuff on the order of what I am.

      goes investigating

      Hmm. Currently all of my video memory (24GB) is being used, but I’m assuming that that’s because Wayland is caching data or something there. I’m pretty sure that I remember having a lot of free VRAM at some point, though maybe that was in X.

      considers

      Let me kill off ComfyUI and see how much that frees up. Operating on the assumption that nothing immediately re-grabs the memory, that’d presumably give a ballpark for VRAM consumption.

      tries

      Hmm. That went down to 1GB for non-ComfyUI stuff like Wayland, so ComfyUI was eating all of that.

      I don’t know. Maybe it caches something.

      experiments further

      About 17GB while running (this and the following numbers include the ~1GB used by other stuff), down to 15GB after the pass completed. That was for a 1280x720 image, and I had the SwinIR upscaler loaded; while not used, it might be resident in VRAM.

      goes to set up a workflow without the upscaler to generate a 512x512 image

      Hmm. 21GB while running. I’d guess that ComfyUI might be doing something to try to make use of all free VRAM, like, do more parallel processing.

      Lemme try with a Stable Diffusion-based model (Realmixxl) instead of the Flux-based Newreality.

      tries

      About 10GB. Hmm.

      kagis

      https://old.reddit.com/r/comfyui/comments/1adhqgy/how_to_run_comfyui_with_mid_vram/

      It sounds like ComfyUI also supports --lowvram and --novram flags (the “mid vram” spelling in that thread title is closer to Automatic1111’s --medvram), but that it’s supposed to automatically select something reasonable based on your system. I dunno, haven’t played with that myself.

      tries --lowvram

      I peak at about 14GB for ComfyUI at 512x512, was 13GB for most of generation.

      tries 1280x720

      Up to 15.7GB, down to 13.9GB after generation. No upscaling, just Newreality.

      Hmm. So, based on that testing, I wouldn’t be surprised if you’re exhausting your VRAM running Flux on a 12GB GPU. I’m guessing that it might be running dry on cards below 16GB (keeping in mind that other stuff is consuming about 1GB for me). I don’t think I have a way to simulate running the card with less VRAM than it physically has to see what happens.

      Keep in mind that I have no idea what kind of memory management is going on here. It could be that pytorch purges stuff if it’s running low and doesn’t actually need that much, so these numbers are too conservative. Or it could be that you really do need that much.
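      A rough back-of-the-envelope check supports the same conclusion (the parameter counts below are approximate figures from the public model cards, not something measured in this thread): just holding model weights in fp16 costs about 2 bytes per parameter.

```python
def weight_vram_gb(params_billion: float, bytes_per_param: int = 2) -> float:
    """GiB needed just to hold the weights (fp16 = 2 bytes per parameter)."""
    return params_billion * 1e9 * bytes_per_param / 2**30

# Approximate parameter counts from the public model cards:
print(round(weight_vram_gb(12), 1))   # Flux.1 dev transformer, ~12B params -> ~22.4 GiB
print(round(weight_vram_gb(2.6), 1))  # SDXL UNet, ~2.6B params -> ~4.8 GiB
```

      Weights alone would put unquantized fp16 Flux beyond a 12GB card, which lines up with the measurements above; quantized variants exist, so treat this as a ceiling for the full-precision model, not a hard requirement.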

      Here’s a workflow (it generates a landscape painting, something I did a while back) using a Stable Diffusion XL-based model, Realmixxl (note: the model and its webpage include NSFW content), which showed a maximum VRAM usage of about 10GB on my system with the attached workflow prompt/settings. You don’t have to use Realmixxl; if you have another model, you should be able to just choose that one instead. But maybe try running it and see if those problems go away? If it works without issues, that’d make me suspicious that you’re running dry on VRAM.

      realmixxx.json.xz.base64
      /Td6WFoAAATm1rRGBMDODKdlIQEWAAAAAAAAAJBbwA/gMqYGRl0APYKAFxwti8poPaQKsgzf7gNj
      HOV2cLGoVLRUIxJu+Mk99kmS9PZ9/aKzcWYFHurbdYORPrA+/NX4nRVi/aTnEFuG5YjSEp0pkaGI
      CQDQpU2cQJVvbLOVQkE/8eb+nYPjBdD/2F6iieDiqxnd414rxO4yDpswrkVGaHmXOJheZAle8f6d
      3MWIGkQGaLsQHSly8COMYvcV4OF1aqOwr9aNIBr8MjflhnuwrpPIP0jdlp+CJEoFM9a21B9XUedf
      VMUQNT9ILtmejaAHkkHu3IAhRShlONNqrea4yYBfdSLRYELtrB8Gp9MXN63qLW582DjC9zsG5s65
      tRHRfW+q7lbZxkOVt4B21lYlrctxReIqyenZ9xKs9RA9BXCV6imysPX4W2J3g1XwpdMBWxRan3Pq
      yX5bD9e4wehtqz0XzM38BLL3+oDneO83P7mHO6Cf6LcLWNzZlLcmpvaznZR1weft1wsCN1nbuAt5
      PJVweTW+s1NEhJe+awSofX+fFMG8IfNGt5tGWU5W3PuthZlBsYD4l5hkRilB8Jf7lTL60kMKv9uh
      pXv5Xuoo9PPj2Ot2YTHJHpsf0jjT/N5Z65v9CDjsdh+gJ5ZzH8vFYtWlD1K6/rIH3kjPas23ERFU
      xoCcYk7R6uxzjZMfdxSy1Mgo6/VqC0ZX+uSzfLok1LLYA7RcppeyY4c/fEpcbOLfYCEr9V+bwI4F
      VDwzBENC412i8JTF8KzzqA0fPF8Q4MwAeBFuJjVq1glsFgYqTpihnw5jVc5UfALRSXS2vjQR78v5
      XDmiK7EvUIinqDJjmCzV+dpnTbjBAURsZNgCU+IJEQeggVybB+DkjOGnr/iIjvaSylO3vu9kq3Kn
      Dhzd/kzXutPecPtterHkiPjJI+5a9nxJPMLMuAqmnsh2sk7LX6OWHssHhxd/b2O2Y4/Ej0WoIZlf
      GD2gOt57hHvTrQ/HaG1AA8wjbHsZXWW9MXbJtDxZbECLIYGfPW2tQCfBaqYlxGXurrhOfZlKPUVx
      K9etDItoDHdwUxeM4HbCdptOjcSWAYgjwcQ4gI3Ook/5YLRbL+4rIgOIwz643v/bMh2jiLa4MEYm
      9a4O8GL4xED3kvRQSgt0SkkIRIHO1AJ042TQC450OKwEtfFgBpmFQ+utgUOObjA409jIEhMoOIeP
      kDqv62f0Uu4qojiX7+o8rrWp/knAkDoFWam1E3ZKEiCINRfmRAMqTsPr8Wq/TQZM5OKtMGBLK9LY
      GxLOOUBCahU5iNPm01X3STNRmQKtATPgqPcszNeLONnZqcWusbZKvwkCoX4Z75T+s+852oo65Li6
      7WQ3SaDBrs47qXeUobVIOjlXO2ln2oRRtTRfbk7/gD6K6s5kBjPexHEEIGseJysilyHyT2+VMtSy
      cyET83Exi5KEnrtl7XgMI4GM1tDeaq6anNdW1VgXdS4ypk8xqHTpQgszuJCgh3ID5pfvbHzzX0A7
      zC5A+4oGk98ihe0dJc+KLzigJuZLk7jX5A7sGkBtht7oKjH8qrYM//DbQXkZbI06h/FP+2aBz5/t
      U3zTsSHyoU3LwihFOj0TA+DKnZUnm4TJtX6ABQaJPTYwHgQJ/B77VI9A+RYf7qd9o4cGaGrLoOES
      QdGPFUOqO0vL9EkpTsxgCEjxApBIw1gTCiaBF8Dofp6vBrd1zY1mXP9p1UunaaFZtdmx/vrWkLXQ
      iO09P6waY+6daKtZ7i+3j0WGvBFHx32toDgd94wGWXBa+hfEEK3d6kq8eGRWJC+OEL9KgUrrN4ki
      vwPjGe/1DXxzPIvZrMP2BtoxO34E9VuvsnmofW3kZvtLZXC+97RznZ5nIpG4Vk+uOPs1ne/s1UD3
      x0vyTviwiK+lFIu5T3NdxFFssClZSDyFUkzCZUpbsLjcH3dzbwEDX4Vjq6rAz2IbXUGU6aTQ7RO1
      Q1iUHaCqHZzNJEKKFcBaL/IGbmeOPUZJ7G3TbEYcMhBPtsmYSwNJkQ0cGj/KKqPF6fxpvNEt+QNh
      isgyNP+AuP0xxQgwXbxW2kO/3Y70L5+eNs2L8u0gJBHevYTAebv/mORBcNcs8hpFVZLOAahajv0h
      zj++ssD9BcgBTVMEC+knn0HjVaRjIW3UPsDApNjIsF7h06hWAGG79VGJb3mQ6PcwQAAAALZxoY8E
      al4jAAHqDKdlAABPKdovscRn+wIAAAAABFla
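      To unpack the attachment: the filename says it’s JSON that was xz-compressed and then base64-encoded, so reversing those two steps recovers the workflow. A minimal sketch (the function name and path are just placeholders):

```python
import base64
import lzma

def decode_attachment(b64_text: str) -> bytes:
    """Undo the .xz.base64 packaging: base64-decode, then xz-decompress."""
    return lzma.decompress(base64.b64decode(b64_text))

# Usage: save the base64 text above to a file, then:
# with open("realmixxx.json.xz.base64") as f:
#     json_bytes = decode_attachment(f.read())
```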
      

      EDIT: Keep in mind that I’m not an expert on resource consumption here and haven’t read up on what the requirements are; there may be good material out there covering it. This is my ad-hoc, five-minutes-or-so of testing. My own solution was mostly to just throw hardware at the problem, so I haven’t spent a lot of time optimizing workflows for VRAM consumption.

      EDIT2: Some of the systems (Automatic1111, I know; dunno about ComfyUI) are also capable, IIRC, of running at reduced precision, which can reduce VRAM usage on some GPUs (though it will affect the output slightly and won’t perfectly reproduce a workflow). So I’m not saying the numbers I give are hard lower limits; it might be possible to configure a system to operate with less VRAM in other ways too. Like I said, I haven’t spent a lot of time trying to drive down ComfyUI VRAM usage.
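      To illustrate the precision point in isolation (a generic NumPy sketch, not what ComfyUI actually does internally): the same million-element tensor at half precision occupies half the bytes.

```python
import numpy as np

# A million weights at single vs. half precision (illustrative only).
weights_fp32 = np.zeros(1_000_000, dtype=np.float32)
weights_fp16 = weights_fp32.astype(np.float16)

print(weights_fp32.nbytes)  # 4000000 bytes
print(weights_fp16.nbytes)  # 2000000 bytes, half the footprint
```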

      • Kojichan@lemmy.world · 2 months ago

        I’ll definitely give that a shot!

        That said, it’s just ComfyUI behaving this way. Nothing is loaded, no LLMs, just the UI node editor with nodes connected.

        The nodes are asking for the appropriate files, but I haven’t selected them. This happens.

        Then, even when I have selected the files, it also misbehaves.

        Not even generating anything. No input was given. I have also tried the ComfyUI fox girl node layout by dragging the image into the ComfyUI window. It can also wig out weirdly.

        The generator queue is idling as per normal.

        Anywho, thanks again! I’m going to try a different browser and maybe update the repository again.

        • tal@lemmy.today · 2 months ago (edited)

          it’s just ComfyUI behaving this way. Nothing is loaded, no LLMs

          Oh, okay, then I’m probably wrong on VRAM. On my system, it needs to actually run the nodes before the VRAM gets allocated. Sorry! I thought I had it…

          • Kojichan@lemmy.world · 2 months ago

            No worries! You gave me a LOT of information that I actually needed otherwise. Like the console commands. :D

            I may need to try another variation of the CivitAI Flux Dev model. I had the low memory one, and the other bits and bobs, but it would freeze my computer solid. I’ll try your tips when that happens! Your information will also be extremely useful for others!!

            You did a good job! :)