• rdri@lemmy.world

    None of that helps or refutes anything I’ve said above. But it does allow saying that the NTFS limit can be basically 1024 bytes. Just because you like what UTF-8 offers doesn’t mean it solves the hurdles of the Linux limits.

    LUKS is commonly used, but it’s not the only option.

    • jabjoe

      Linux’s VFS is where the 256 limit is hard. Some Linux filesystems, like ReiserFS, go way beyond it. If it were a big deal, it would be patched and the patch widely spread. The magic of Linux is that you can try it yourself, run your own fork, and submit patches.
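
      You can check what a given mount actually reports from a shell (the mount point here is just an example):

      getconf NAME_MAX /    # max bytes in a single filename, typically 255
      getconf PATH_MAX /    # max bytes in a whole path, typically 4096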

      LUKS is the one to talk about, as the others aren’t generally as good an approach. LUKS is the recommended approach.

      Edit: oh and NTFS is 512 bytes. UTF-16 = 16-bit = 2 bytes. 256 * 2 = 512
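
      You can sanity-check the byte maths from a shell (the sample name is arbitrary; iconv is assumed to be available):

      echo -n "filename" | wc -c                       # 8 bytes in UTF-8 (ASCII)
      echo -n "filename" | iconv -t UTF-16LE | wc -c   # 16 bytes in UTF-16, 2 per character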

      • rdri@lemmy.world

        The magic of Linux is that you can try it yourself, run your own fork, and submit patches.

        Well, it should probably go further and offer another kind of magic: where stuff works the way the user expects it to.

        As for submitting patches, it sounds like you’re suggesting people play around with and touch core parts responsible for filesystem operations. Such advice is not going to work for everyone. Open source software is not ideal. It can be ideal in theory, but that’s it.

        LUKS is the one to talk about, as the others aren’t generally as good an approach. LUKS is the recommended approach.

        It looks like there are enough use cases where some people would prefer not to use LUKS.

        • jabjoe

          I have lived quite happily on pretty much only open source for over 12 years now, professionally and at home (longer at home). I put Debian alongside Wikipedia as an example of what humans can be.

          There are no gatekeepers on who can do what where, only on who will accept the patches. Projects fork for all kinds of reasons, though even Google failed to fork the Linux kernel. If there is some good patch to extend the filename limit, it will get in. With enough pressure, maybe the core team of that subsystem will do it.

          Open source already won, I’m afraid. Most of the internet, from IoT to supercomputers, runs open source. It has been that way for a while. If you use Windows, fine, but it is just a consumer end-node OS for muggles. 😉

          If you set up a new install and say you want encryption, LUKS is what you get.

          • rdri@lemmy.world

            Does it look like I advocate for Windows? Nah.

            Open source is great when it works. “If there is some good patch…” and “Enough pressure and maybe…” is the sad reality of it. Why should people need to apply pressure just for Linux to start supporting features long available in the filesystems it supports? Why should I, specifically, spend time on it? Does Linux want to become an OS for everyone, or only for people experimenting with dangerous stuff that makes them lose data sometimes?

            Don’t get me wrong, Linux is good even now. But there is no need to actively deny points of possible improvement. When someone asks how great XFS is compared to others, you shouldn’t throw the word “exbibytes” at them; you should first think about what problems people might have with it, especially if they want to switch from Windows.

            If you set up a new install and say you want encryption, LUKS is what you get.

            And if I want to encrypt only some files? I need to create a volume specifically for that, right? Or I could just use something else.

            • jabjoe

              Open source clearly works, given the scale and breadth of its use. That’s the modern world, and its use is only increasing. This is a good thing for multiple reasons.

              Unicode filename length clearly isn’t as big an issue as you feel or it would be fixed. There is some BIG money that could be spent to fix this for the countries and companies who need Unicode.

              How you encrypt depends on your aim. If your aim is to get around the limit on characters available for filenames, there are ways. If it’s read-only, you do a GPG tarball (sketch below). LUKS if you want a live system. You can just create a file and LUKS-format it.
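
              For the read-only GPG tarball route, a rough sketch (the filenames are just examples):

              tar czf myfiles.tar.gz myfiles/        # bundle the directory
              gpg -c myfiles.tar.gz                  # symmetric encryption, asks for a passphrase
              rm myfiles.tar.gz                      # keep only myfiles.tar.gz.gpg

              gpg -d myfiles.tar.gz.gpg | tar xzf -  # read it back later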

              Setup

              sudo fallocate -l 1G test.img                      # create a 1 GiB container file
              sudo cryptsetup luksFormat --type luks2 test.img   # turn it into a LUKS2 volume
              sudo cryptsetup luksOpen test.img myplace          # map it as /dev/mapper/myplace
              sudo mkfs.ext4 /dev/mapper/myplace                 # make a filesystem inside it
              sudo mkdir /mnt/myplace
              sudo mount /dev/mapper/myplace /mnt/myplace        # mount it like any other disk
              

              Close

              sudo umount /mnt/myplace
              sudo cryptsetup luksClose myplace
              

              Reopen

              sudo cryptsetup luksOpen test.img myplace
              sudo mount /dev/mapper/myplace /mnt/myplace
              

              Basically the same as systemd-homed does for you: https://wiki.archlinux.org/title/Systemd-homed
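
              With systemd-homed it’s roughly one command (the username and size here are made up):

              sudo homectl create myuser --storage=luks --disk-size=10G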

              But there are many ways. A good few filesystems offer folder/file encryption natively, though I’d argue that’s less secure.
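
              For example, ext4’s native encryption through the fscrypt tool looks roughly like this (a sketch; the device and directory names are made up, and the fscrypt package is assumed to be installed):

              sudo tune2fs -O encrypt /dev/sdXn   # enable the encrypt feature on the ext4 filesystem
              sudo fscrypt setup                  # one-time global setup
              sudo fscrypt setup /                # prepare the mounted filesystem
              mkdir ~/private
              fscrypt encrypt ~/private           # protect the directory with its own passphrase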

              • rdri@lemmy.world

                clearly isn’t as big an issue as you feel or it would be fixed.

                I might have agreed with such statements 20 years ago, but not anymore. I can’t count the times I’ve seen a piece of software, a game, a system, or a service literally brick itself when a use case involves non-ASCII, non-English, or non-Unicode characters, paths, or regions. It’s not Linux-related only or specifically, but it almost always looks and feels embarrassing. I’ve seen some related global improvements in Windows, NTFS, and some products, but all that is still not enough in my opinion. The thought that people shouldn’t need >255 bytes (or symbols) sounds no different from that 640K RAM quote.

                • jabjoe

                  I doubt the Linux kernel bricks itself when filenames are too long, regardless of encoding. It doesn’t do characters, just bytes. If there are too many bytes, they just get trimmed. The user level above that, I can certainly believe, on all platforms. The difference is that in the open world you can fix it and throw in a patch. It’s an embarrassing crash, and will be a simple fix, so it will get in. With closed products, well, maybe you can log it, maybe they will fix it, but you’re in serfdom unless you have real money and other options.

                  The other thing that makes me think this can’t be as big an issue as you say is that the example you gave still looks bloody long. Seems like doing it wrong if the filename is a sentence. It’s a filename, not a filesentence.

                  This tiny, and seemingly silly, thing doesn’t make Windows and NTFS any less laughable in 2024.

                  • rdri@lemmy.world

                    You aren’t getting it.

                    It’s not about bricking; it’s about relying on “standards” (limitations, actually) that should be obsolete in 2024, in a multinational technology world. It’s about the fact that they effectively limit how people from all around the world can use characters, words, names, etc. anywhere.

                    It’s not about money, and not about patches or developing them. It’s about what users expect. They surely don’t expect to be told “fix it yourself if you don’t like it”.

                    This is by no means a “big” issue because it affects less than 1 percent of users, sure. Not many people hit the NTFS limit on Windows either, yet you can find thousands of places where people discuss that long-paths setting: people who need to overcome it, people who may even be grateful that such an option appeared in later Windows versions.

                    It’s a filename, not a filesentence.

                    😒 Yep, that’s useless. What’s next, “hey, Linux doesn’t support .exe, those are games for Windows, so you play them on Windows”?