A few weeks ago Lemmy was buggy on computers and there were no good mobile clients out there. Now the site is pretty stable and fast on PC, and there are some pretty good iOS/Android clients too. Thanks to all the people who made this possible!

  • SneakyWaffles@vlemmy.net

    Dude, I think you’re just ignorant of how web hosting works. Every single site you visit is hosted on probably dozens or more servers so that it can load balance or guarantee better uptime. It’s normal. It’s weird to be in a place that is only on one server.

    Being able to host a stable site doesn’t mean everything is suddenly moving onto one instance either. The NBA subreddit, for example, a single community, has millions of members. Lemmy can’t handle anything like that. And having no technological way to support large communities is a guaranteed way to kill your app.

    You also seem to be very in favor of spreading out and decentralizing… except for Beehaw. Wonder why you’re such a purist for decentralization except in this case. Weird. Being able to defederate, make moderation decisions for yourself, and making big decisions like that to defend your community is the whole point of these sites. Maybe you should go back to Reddit if you aren’t able to handle it. And for the record, you’d have to be blind to not see moderation controls are lacking at best for this brand new actively being developed site.

    • rglullis@communick.news

      Dude, I think you’re just ignorant of how web hosting works.

      I run a managed hosting service for Mastodon and Lemmy, but yeah…

      Every single site you visit is hosted on probably dozens or more servers so that it can load balance or guarantee better uptime.

      Hacker News: one single FreeBSD box. Not even a database.

      Also, your cargo cult is showing… talking about “load balance” as a guarantee of uptime is the same as justifying using Mongo because it is webscale.

      • SneakyWaffles@vlemmy.net

        You sound like an old script kiddie who says they’re a hacker because they ran a script from a forum. If it wasn’t obvious, I’m talking about actual web architecture, not hobby junk. Managing to stand up a tiny virtual instance for a few people does not mean that you understand anything.

        As I said, this is basic architecture shit. Like, an-intern-would-understand-it kind of basic.

        talking about “load balance” as a guarantee of uptime is the same as justifying using Mongo because it is webscale

        ??? Are you unironically implying that a site with a backend that has multiple servers stood up to spread the load won’t have tremendously better capacity, redundancy, and as a result better uptime than a single hobby PC in your living room or whatever you have set up?

        • rglullis@communick.news

          Can you please stop with the unnecessary snark and this silly attempt at dick-measuring? Are you upset at something?

          Are you unironically implying that a site with a backend that has multiple servers stood up to spread the load won’t have tremendously better capacity, redundancy…

          No. I am saying that the majority of websites out there don’t need to pay the costs or worry about this.

          Good engineering is about understanding trade-offs. We could talk all day about the different strategies for getting 4, 5, or 6 nines of availability, but all of that would be pointless if the conversation is not anchored in how much it will cost to implement and operate such a solution.
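          To put numbers on those nines: each extra nine cuts the allowed downtime tenfold, while the cost of achieving it typically grows much faster. A quick back-of-the-envelope sketch (the helper function is illustrative, not from any project):

```python
# Illustrative arithmetic for the availability "nines" mentioned above.
# An availability of N nines (e.g. 4 nines = 99.99%) permits a downtime
# fraction of 10^-N of the year.

MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960 minutes

def allowed_downtime_minutes(nines: int) -> float:
    """Maximum downtime per year, in minutes, for `nines` nines of availability."""
    return MINUTES_PER_YEAR * 10 ** -nines

for n in (4, 5, 6):
    print(f"{n} nines: {allowed_downtime_minutes(n):.1f} min/year")
# 4 nines: 52.6 min/year
# 5 nines: 5.3 min/year
# 6 nines: 0.5 min/year
```

          A couple of minutes of downtime a month already fits comfortably inside 4 nines, which is why the cost argument below matters.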

          Lemmy - like all other social media software - does not need that. There is nothing critical about it. No one dies if the server goes offline for a couple of minutes a month. No business will stop making money if we take the database down to do a migration instead of using blue-green deployments. Even the busiest instances are not seeing enough load to warrant more servers, and they are able to scale by simply (1) fine-tuning the database (which is the real bottleneck) and (2) launching more processes.

          Anyone criticizing Lemmy because “it cannot scale out” is either talking out of their ass or a bad engineer. Possibly both.