Now, I really like Wayland, and it’s definitely better than the mess that is X11

BUT

I think the approach to Wayland is entirely wrong. There should be a unified backend/base for building compositors, something like a universal wlroots, so that applications dealing with things like setting wallpapers don’t have to worry about separately supporting GNOME, Plasma, wlroots, AND Smithay (when COSMIC comes out). How about a universal Wayland protocol implementation that compositors are built on? That way, the developers of, say, wayshot, a screenshot utility, can be sure their program works across all Wayland compositors.
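
To make the fragmentation concrete, here is a rough sketch of what a wallpaper-setting helper ends up looking like today (the script is hypothetical and the tool names vary by desktop and version; swaybg covers wlroots compositors, while GNOME and Plasma each need their own mechanism):

#!/bin/sh
# hypothetical set-wallpaper.sh IMAGE -- one desktop family, one code path
case "$XDG_CURRENT_DESKTOP" in
  *GNOME*) gsettings set org.gnome.desktop.background picture-uri "file://$1" ;;
  *KDE*)   plasma-apply-wallpaperimage "$1" ;;
  *)       swaybg -i "$1" -m fill &   # sway, Hyprland and other wlroots compositors
           ;;
esac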

Currently, the lower-level work for creating a compositor has been done separately by the GNOME, KDE, wlroots and Smithay projects. To me, that’s just duplication of work and resources. Surely, if all the standalone compositors, and even the XFCE desktop, are willing to use wlroots, the GNOME and KDE teams could have done the same instead of replicating effort, wasting time and resources, and causing needless fragmentation in the process?

Am I missing something? Surely doing something like that would be better?

The issue with X11 is that it got big, bloated, and unmaintainable, full of useless code. None of today’s desktops use that code, which has been sitting in X since the days when 20 terminals were all connected to one mainframe. So why not just use the lean and maintainable wlroots, making things easier for app developers? And if wlroots ever follows in the footsteps of X11, we can move to another implementation of the Wayland protocols. The advantage of Wayland is that it is a set of protocols describing how to make a compositor that acts as a display server. If all the current Wayland implementations disappear, or become abandoned, unmaintained, or unmaintainable, the Wayland apps that don’t touch the compositor itself (calendars, file managers and other ordinary programs) would keep working on any new implementation. That’s the advantage for the developers of such applications.

But what about other programs, like theme changers and wallpaper switchers? They would need to be remade for each Wayland implementation. With a unified framework, we could remove this issue. I think that for some things the Linux desktop needs some unity, and this is one of them. Another would be Flatpak for desktop applications, and eventually Nix and similar projects for lower-level programs on immutable distros, but that’s a topic for another day. Anyway, do you agree with my opinion on Wayland or not? And why? Thank you for reading.

  • theshatterstone54 (OP) · 1 year ago

    I have seen some improvements, to be honest. I have never seen screen tearing (which was quite common on X11), and compositors run more smoothly for me, with less resource usage (though some of the savings are unfortunately eaten up by heavy bars like Waybar). For example, Qtile would usually sit at about 780 MB on a cold boot on X11, while on Wayland it averages about 580-600 MB.

    • lloram239@feddit.de · 1 year ago

      The thing is, what are the chances that those improvements needed a complete rewrite and couldn’t just be patched into X11? As for the lack of screen tearing, is that even an advantage? In X11, to get rid of it I can do this (it depends on the driver, but AMD has had it for ages):

      xrandr --output HDMI-0 --set TearFree on
      

      But more importantly, I can also set TearFree off to get more responsiveness. Especially when it comes to gaming, that is a very important option to have.
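
      For example (a quick sketch assuming the amdgpu driver, which exposes TearFree as an output property; same output name as above):

      xrandr --output HDMI-0 --set TearFree off   # trade tear protection for lower latency
      xrandr --prop | grep -i TearFree            # inspect the property's current value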

      There are also other things, like CSD (client-side decorations), which I consider a fundamental downgrade from the flexibility that X11 offered.

      • patatahooligan@lemmy.world · 1 year ago

        Disabling screen tearing for two or more monitors with different refresh rates is, as far as I know, impossible within the X11 protocol. This is especially annoying for high-refresh-rate VRR monitors, which could be tear-free with negligible cost in responsiveness.

        You also can’t prevent processes from manipulating each other’s inputs/outputs. An X11 system can never have meaningful sandboxing because of this. Maybe you could run a new, tweaked and sandboxed X server for every process, but at that point you’re working against the protocol’s fundamental design.
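
        As a quick illustration (stock xinput tool; device IDs differ per machine), any unprivileged X11 client can watch every keystroke in the session:

        xinput list          # find your keyboard's device id
        xinput test <id>     # prints every key press/release, regardless of which window has focus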

        • lloram239@feddit.de · 1 year ago

          You also can’t prevent processes from manipulating each other’s inputs/outputs.

          That’s one of those pseudo-problems. In theory, yeah, a bit more control over what apps can and can’t access would be nice. In reality, it doesn’t really matter, since any malicious app can do more than enough damage even without having access to the Xserver. The solution is to not run malicious code, or to use WASM if you want real isolation. Xnest, Xephyr and X11 protocol proxy have also been around for a while; X11 doesn’t prevent you from doing isolation.
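
          For instance, with Xephyr (a rough sketch; “untrusted-app” is just a placeholder for whatever you want to isolate):

          Xephyr :2 -screen 1280x720 &    # nested X server running inside a window
          DISPLAY=:2 untrusted-app        # the app only sees the nested display, not your real session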

          Trying to patch sandboxing into Linux after the fact not only fails to give you isolation that is actually meaningful, it also restricts user freedom enormously. Screenshots, screen recording, screen sharing, keyboard macros, automation, etc., all very important things, suddenly become a whole lot more difficult if everything is isolated. You lose a ton of functionality without gaining any. Almost 15 years later, Wayland is still playing catch-up to features that used to “just work” in X11.

          • patatahooligan@lemmy.world · 1 year ago

            In theory, yeah, a bit more control over what apps can and can’t access would be nice. In reality, it doesn’t really matter, since any malicious app can do more than enough damage even without having access to the Xserver.

            Complete nonsense. Moving to a protocol that doesn’t allow every single application to log all inputs isn’t “a bit more control over what apps can and can’t access”. We’re switching from a protocol where isolation is impossible to one where it is.

            The notion that, if you can’t stop every possible attack with a sandbox, you should not bother to stop any of them is also ridiculous. A lot of malware is unsophisticated and low-effort. Not bothering to patch gaping security holes just because there might be malware out there that gets around a sandbox is like leaving all your valuables on the sidewalk outside your house because a good thief would have been able to break in anyway. You’re free to do so, but you’ll never convince me to do it.

            The solution is to not run malicious code

            Another mischaracterization of the situation. People don’t go around deliberately running “malicious code”. But almost everyone runs a huge amount of dubious code. Just playing games, a very common use case, means running millions of lines of proprietary code written by companies that couldn’t care less about your security or privacy, or in some cases are actively trying to get your private data. Most games have some online component, and many even expose you to unmoderated input from online strangers. Sandboxing just Steam and your browser is a huge step in reducing the number of exploitable vulnerabilities you are exposed to. But that’s all pointless if every app can spy on your every input.
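
            Concretely, sandboxing Steam with Flatpak looks something like this (illustrative, assuming the Flathub build; the exact overrides are up to you):

            flatpak override --user --nofilesystem=home com.valvesoftware.Steam   # drop blanket home-dir access
            flatpak info --show-permissions com.valvesoftware.Steam               # see what the sandbox still allows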

            Xnest, Xephyr and X11 protocol proxy have also been around for a while; X11 doesn’t prevent you from doing isolation.

            What’s the point then of a server-client architecture if I end up starting a dedicated server for every application? It might be possible to have isolation this way, but it is obviously patched on top of a flawed design that didn’t account for isolation to begin with. Doing it this way will break all the same stuff that Wayland breaks anyway, so it’s not a better approach in any way.

            • lloram239@feddit.de · 1 year ago

              Moving to a protocol that doesn’t allow every single application to log all inputs isn’t “a bit more control over what apps can and can’t access”.

              Every app already has full access to your home directory and can replace every other app simply by fiddling with $PATH. What you get with Wayland is at best a dangerous illusion of security.
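
              Which is trivial to demonstrate (illustrative; this assumes a user-writable directory such as ~/.local/bin sits ahead of /usr/bin in $PATH):

              mkdir -p ~/.local/bin
              printf '#!/bin/sh\necho impostor\n' > ~/.local/bin/firefox
              chmod +x ~/.local/bin/firefox
              # from now on, running "firefox" from a shell starts the shim instead of the real browser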

              What’s the point then of a server-client architecture if I end up starting a dedicated server for every application?

              Flexibility. I can choose to sandbox things or not. And given how garbage the modern state of sandboxing still is, I’d rather take that flexibility than be forced to sandbox everything.

              Anyway, to take a step back: Wayland doesn’t actually solve any of this. It just ignores it. Not having a way to record inputs or take screenshots does not improve security; it simply forces the user to find other means to accomplish those tasks, and those means can then be utilized by any malicious app just the same. If you actually want to solve this issue, you have to provide secure means to do all those tasks.

              • patatahooligan@lemmy.world · 1 year ago

                I think you misunderstood what I was saying. I’m not saying Wayland magically makes everything secure. I’m saying that Wayland allows secure solutions. Let’s put it simply:

                • Wayland “ignores” all the issues, if that’s what you want to call it.
                • Xorg breaks attempts to solve these issues, which is much worse than “ignoring” them.

                You mentioned apps having full access to my home directory. Apps don’t have access to my home directory if I run them in a sandbox. But using a sandbox to protect my SSH keys or Firefox session cookies is pointless if the sandboxed app can just grab my login details as I type them and do the same harm, or more, as it would with the contents of my home directory. Using a sandbox is only beneficial on Wayland. You could potentially use nested Xorg sessions for everything, but that’s more overhead and introduces all the same problems as Wayland (screen capture, global shortcuts, etc.) while having none of the Wayland benefits.

                And given how garbage the modern state of sandboxing still is

                I’m not talking about “the current state” or any particular tool. One protocol supports sandboxing cleanly and the other doesn’t. You might have noticed that display server protocols are hard to replace, so they should support what we want, not only what we have right now. If you don’t see a difference between not having a good way to do something right now and never allowing for the possibility of doing it well, let’s just end the discussion here. If those are the same to you, no argument or explanation matters.

                If you actually want to solve this issue, you have to provide secure means to do all those tasks.

                Yes, that’s exactly the point. Proposed protocols for these features allow a secure implementation to be configured. You would have a DE that asks you for every single permission an app requests. You don’t automatically get a secure implementation, but it is possible. There might be issues with the Wayland protocol development process, or a lack of interest/manpower among DE/WM developers, or many other things that lead to subpar or missing solutions to current issues, but these are not inherent, unsolvable problems of the protocol.
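
                As a rough illustration of the kind of permission-mediated flow I mean (assuming xdg-desktop-portal with a Screenshot backend is running; the result comes back asynchronously via a Response signal, so this call just starts the request and lets the DE prompt the user):

                # ask the desktop portal for a screenshot instead of reading the screen directly
                gdbus call --session \
                  --dest org.freedesktop.portal.Desktop \
                  --object-path /org/freedesktop/portal/desktop \
                  --method org.freedesktop.portal.Screenshot.Screenshot "" "{'interactive': <true>}"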