My favorite is pacman because it is fast af, but it has really weird syntax.

        • nyan@lemmy.cafe · 2 years ago

          No one’s fielded this yet, so I’ll give it a shot.

          Portage offers maximum configurability: you can switch optional package features on and off. If a package feature is off, you don’t need to install dependencies to support it, so it makes for a slimmer system.
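          For instance (the paths are standard Portage config locations; the package and flags are just illustrative), switching features off is a one-line change:

          ```
          # /etc/portage/package.use
          # Build mpv without X and SDL support, but with PulseAudio.
          # Dependencies pulled in only by the disabled flags won't be installed.
          media-video/mpv -X -sdl pulseaudio
          ```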

          You can upgrade many packages even if the distribution hasn’t by copying a single small file to a new name and running two commands.
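          Concretely, a version bump in a local overlay looks something like this (package name hypothetical; `ebuild` and `emerge` are the standard Portage tools):

          ```
          # Copy the old ebuild to the new version's name...
          cp foo-1.2.3.ebuild foo-1.2.4.ebuild
          # ...then regenerate checksums and build it:
          ebuild foo-1.2.4.ebuild manifest
          emerge -av foo
          ```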

          Similarly, if you’re running a new or fringe architecture (like riscv) and want to try to install a package that isn’t officially available for it, you can do it fairly simply (minor edit to a text file or additional parameter at the command line). Doesn’t always work, but it’s still easier than the configure-make-make_install dance, and the dependencies are handled for you.
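          The "minor edit" is typically one line in `/etc/portage/package.accept_keywords` (package name illustrative), and the command-line variant sets the same thing for a single run:

          ```
          # /etc/portage/package.accept_keywords/foo
          # Accept this package even though it has no riscv keyword yet
          app-misc/foo **
          ```

          or, one-off:

          ```
          ACCEPT_KEYWORDS="~riscv" emerge -av app-misc/foo
          ```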

          Portage also supports a bunch of other fringe use cases, like pulling source straight from git and building it. And you can create simple packages by writing a text file of fewer than 10 lines (well, a specialized bash shell script).
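          A minimal live-git ebuild really is about that short. A hypothetical sketch (package name and URLs invented; `git-r3` is the standard eclass for building from git):

          ```
          # app-misc/hello-ng/hello-ng-9999.ebuild (hypothetical package)
          EAPI=8
          inherit git-r3

          DESCRIPTION="Example live package built straight from git"
          HOMEPAGE="https://example.org/hello-ng"
          EGIT_REPO_URI="https://example.org/hello-ng.git"

          LICENSE="MIT"
          SLOT="0"

          src_install() {
                  dobin hello-ng
          }
          ```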

          On the downside, Portage is S-L-O-W. It has more complicated dependency trees to resolve than other package managers, and installs most packages by building them from source (although this isn’t a requirement).

          I like it, though.

    • Joe B@lemmy.world · 2 years ago

      I used to like Portage a lot when I first tried Gentoo, but I was like, damn, do I really have to build every single thing? Don’t get me wrong, Gentoo keeps your system maintained, clean, and minimal, but the time spent compiling got my wife angry lol

      • Illecors@lemmy.cafe · 2 years ago

        It can get tedious on a single machine. Once you have enough machines for a binhost to start making sense… Now we’re talking 🤣
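        The usual setup is roughly this sketch (standard make.conf variables; the host name is invented): the build box saves a binary package for everything it emerges, and the clients pull those instead of compiling.

        ```
        # On the binhost, in /etc/portage/make.conf:
        FEATURES="buildpkg"              # save a binary package for every emerge

        # On each client, in /etc/portage/make.conf:
        PORTAGE_BINHOST="https://binhost.example.lan/packages"
        EMERGE_DEFAULT_OPTS="--usepkg"   # prefer prebuilt binaries when available
        ```

        (Newer Portage versions prefer configuring this via binrepos.conf, but the idea is the same.)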

          • Illecors@lemmy.cafe · 2 years ago

            It’s some computing device (technically a smart toaster could do it) that shares the binaries over the network with other machines. Normally stuff is compiled for the lowest common denominator of CPU architecture and supported features.
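            "Lowest common denominator" here usually comes down to conservative compiler flags in the binhost's make.conf, something like this (values purely illustrative):

            ```
            # Target the oldest CPU in the fleet, not the build box itself,
            # so every client can run the resulting binaries.
            # (Avoid -march=native on a binhost for this reason.)
            COMMON_FLAGS="-O2 -pipe -march=x86-64-v2"
            CFLAGS="${COMMON_FLAGS}"
            CXXFLAGS="${COMMON_FLAGS}"
            ```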

            I have it as a VM; some people do it on bare metal. At the moment I’m trying to have multiple CPU architectures supported by cheating a bit with BTRFS snapshots; time will tell if it works out.

            • Joe B@lemmy.world · 2 years ago

              Got it.

              Never got into btrfs. I see the value in it: if something crashes or goes down, you can go back to that snapshot and everything comes back. But I just never really had issues. I distro hop also, so I don’t know when I’ll hop; it’s spontaneous. Maybe one of these days I will get back to Arch and play with it.

              • Illecors@lemmy.cafe · 2 years ago

                The ability to come back is awesome, although I have never had a reason to use it.

                For a distro hopper like yourself it would actually make life so much easier! Because of how subvolumes work, you can have every distro in a separate subvolume. They can share the home subvolume if you like, or not. You also get upgrades with a failsafe of sorts for the likes of Ubuntu, whose upgrades, in my limited personal experience, have never ever been without issues.

                Having a server subvolume to run Portage in and then snapshotting it to a desktop one and applying the desktop config saves some time on recompiling the big friends like gcc and llvm.
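                As a sketch with the stock btrfs tooling (pool path and subvolume names invented):

                ```
                # One subvolume per distro, plus a shared home
                btrfs subvolume create /mnt/pool/@gentoo
                btrfs subvolume create /mnt/pool/@ubuntu
                btrfs subvolume create /mnt/pool/@home

                # Clone the already-compiled server root as the
                # starting point for the desktop build
                btrfs subvolume snapshot /mnt/pool/@gentoo-server \
                        /mnt/pool/@gentoo-desktop
                ```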

                I did not understand the point of BTRFS at first either, especially since it was slower than ext4. But since I started using it, I’ve found there are scenarios that were previously impossible or incredibly complicated, like read-only root and incremental backups over the network (yes, rsync exists, but this feels cleaner).
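                The incremental network backup is the btrfs send/receive pair: snapshot, then send only the difference against the previous snapshot. A sketch (host and paths invented):

                ```
                # Take a read-only snapshot of /home...
                btrfs subvolume snapshot -r /home /snapshots/home-today

                # ...then send only what changed since yesterday's snapshot
                btrfs send -p /snapshots/home-yesterday /snapshots/home-today | \
                        ssh backup.example.lan btrfs receive /backups/home
                ```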