Seems to me the fear of overloading one instance over another will not happen after all.
But I do hope the Threadiverse can hit 500,000 consistent active users by the end of summer.
Give me that hopium guys! 💉
I think a problem for new users is failing to understand how the Fediverse works. It’s not something apparent and not something you can expect everyone to understand right off the bat. A user may start out on a heavily loaded instance and get discouraged by poor response. They either figure out they need to find a better instance or base their opinion of the whole on that one experience and give up altogether.
Lemmy.ml and lemmy.world can suffer from heavy user load and bog down at times. That can be avoided by selecting an instance that's not too heavily loaded, and there are a large number to choose from, though it may take some shopping around to find a good one. In technical terms: find a regionally local instance with a low hop count, fast ping, and good server response times. Admin policies and moderation quality can also be a consideration. I actually signed up on four instances before I found one I really liked.
I’ve been here for a few weeks now… And I’m still not entirely sure how fediverse works. I was under the impression that it didn’t really matter which instance you sign up for, they would still communicate with one another.
I think there needs to be a better/simplified explanation on the website of how everything works.
Ideally, it shouldn't matter much which instance you pick, but presenting that ideal as the reality is one of the biggest miscommunications about how all this stuff works.
Realistically instance choice matters both regarding technical stuff like how well it handles traffic and social stuff like whether folks are discussing anything that interests you to begin with and whether the instance’s moderation style appeals to you. When all of this pans out, the tech should fade into the background, but as we’ve seen, it’s early days yet in that regard.
Lemmy uses a queue to send out activities. The size of this queue is specified by the config value federation.worker_count. Very large instances might need to increase this value. Search the logs for "Activity queue stats"; if the reported queue size is consistently larger than the worker_count (default: 64), the count needs to be increased.
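For anyone self-hosting, the setting lives in Lemmy's config.hjson. A minimal fragment might look like the sketch below; the exact nesting can vary between Lemmy versions, so treat this as illustrative rather than a drop-in config:

```hjson
{
  # ...rest of config omitted...
  federation: {
    # Number of workers sending outgoing activities to other instances.
    # Bump this if "Activity queue stats" in the logs is consistently
    # larger than the current value (default: 64).
    worker_count: 64
  }
}
```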
I’m with you except for the “whether folks are discussing anything interesting to you.” So maybe I’m not understanding that part. On Memmy, or Wefwef, or a variety of apps, I’m fed posts from all different instances so it really doesn’t matter to me what instance is my home base provided that I agree with the moderation style and they are fully federated. Is it just because I’m using a third-party app that my choice of home base doesn’t matter as much?
So, it's a subtle detail and may not matter for many folks, but your instance choice affects which remote communities show up in your All feed, since the people there choose which communities to subscribe to, and presumably discuss and post in them too. This isn't as obvious on Lemmy yet, since many instances are general-purpose rather than focused at the moment and communities are still taking shape, but it's really clear on smaller Mastodon instances.
An easy example would be a tech or programming instance that strictly limits the creation of its local communities, e.g. programming.dev. Right off the bat you know a lot of the discussion there has to do with programming, and in turn there's a decent chance that many of the communities people follow through there are also programming or tech related, so the All feed may have a largely tech/programming focus to it.
As time goes on, you may see more focused instances with stricter sign-ups specifically to ensure their all feed relates more to their community’s focus, but honestly probably not too many as people enjoy flexibility in their posting.
It’s like email. When you sign up for Gmail, you get all the Gmail features, use the Gmail website to access your email, and can send email to any other email, like Proton or Hotmail or whatever. But if Gmail goes down you can’t read or send any email.
With Lemmy, you can see communities from any other server, like lemmy.ml or tchncs.de. But some servers might have a different interface and slightly different features, and if your instance goes down you won't be able to log in unless you have an account somewhere else.
The extra cool thing is this extends beyond Lemmy. Other social media platforms like Mastodon and Pixelfed communicate the same way Lemmy does; they just look different. You can see Lemmy communities on Mastodon, and see Mastodon toots on Pixelfed.
A complete lack of documentation has made the whole process of converting to Lemmy a massive pain in the ass.
Another major problem is that it's not working the way it's designed to.
If an instance gets bogged down, or an instance is misconfigured, then data doesn’t always replicate. Comments go missing from certain instances, etc.
The most basic explanation I can give is this: yes, instances can communicate with each other, but they don't share data automatically. A user from instance A has to interact with data from instance B by browsing directly to it via the correct URL string (instanceA.com/c/community@instanceB.com) and then interacting with content in that community. Data from that specific community will then start showing up.
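To make that URL pattern concrete, here's a tiny Python helper (purely illustrative, not part of Lemmy itself) that builds the address a user on instance A would browse to in order to reach a community hosted on instance B:

```python
def remote_community_url(home_instance: str, community: str, host_instance: str) -> str:
    """Build the URL a user on `home_instance` visits to view a
    community that actually lives on `host_instance`, following the
    instanceA.com/c/community@instanceB.com pattern described above."""
    # A community on your own instance needs no @-suffix.
    if home_instance == host_instance:
        return f"https://{home_instance}/c/{community}"
    return f"https://{home_instance}/c/{community}@{host_instance}"

# e.g. a lemmy.ca user viewing lemmy.world's "technology" community:
print(remote_community_url("lemmy.ca", "technology", "lemmy.world"))
# → https://lemmy.ca/c/technology@lemmy.world
```

Visiting that URL from your home instance is what triggers your instance to start pulling in that community's content.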
That’s a large part of the reason that smaller instances have partial data from larger ones. Their users haven’t interacted with enough communities outside their own instance.
If I’m on a different instance, but I access communities on lemmy.world, would being on a different instance actually make a difference in user experience? Isn’t that community hosted on lemmy.world still subject to overloading?
I believe that your own instance pulls the feed from the other instance, so you're not actually browsing that other instance. If other users on your local instance are also subscribed to that particular community, then your local instance is already syncing the feed. Essentially, I believe each federated instance replicates a copy of the other instance's communities, if and when those communities are requested or subscribed to by a user on the local instance. Hope that makes sense, and if anyone has a better (or more accurate) explanation, please feel free to correct me.
Think that’s more or less correct, but regarding @drturtle@lemmy.world’s question about overloading, I think that it may affect folks even on other instances if Lemmy.world’s overloading affects its response time to other servers attempting to sync with it.
E.g. Lemmy.world is bogged down -> Lemm.ee tries to sync posts/comments from .world -> .world takes longer to fulfill the request -> Lemm.ee sees older posts/comments for a while until .world catches up to requests.
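That chain can be sketched with a toy backlog model (the rates here are made-up numbers, just to illustrate the lag, not measurements of any real instance): if the overloaded instance can only deliver a fixed number of activities per second to its federation peers while new activity arrives faster, the undelivered backlog, and therefore how stale a peer's view is, keeps growing until load drops.

```python
# Toy model: activities (posts/comments/votes) are created at
# `arrival_rate` per second on the overloaded instance, but it can only
# deliver `service_rate` per second to a federated peer. The backlog is
# how many activities the peer is still waiting to receive.
def backlog_after(seconds: int, arrival_rate: float, service_rate: float) -> float:
    backlog = 0.0
    for _ in range(seconds):
        backlog += arrival_rate                 # new activity created
        backlog -= min(backlog, service_rate)   # deliveries actually sent
    return backlog

# Overloaded: 100 activities/s created, only 80/s delivered.
print(backlog_after(60, 100, 80))  # → 1200.0 (peer falls further behind)
# Recovered: once load drops below capacity, the queue stays drained.
print(backlog_after(60, 50, 80))   # → 0.0 (peer is caught up)
```

The takeaway matches what people observed: the stale posts and missing comments weren't data loss, just a delivery queue draining slower than it filled.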
Glad to hear I’m not totally off the mark. I wonder then if instance-to-instance transactions would cause less overall congestion than local user traffic in such cases.
For example, if there are 25,000 users spread across 5 instances (with some overlap in community participation), would the instance-to-instance transactions needed to facilitate these users result in less of a performance hit than having all 25,000 users on the same instance? I don’t know nearly enough about databases to make an educated guess.
The problem I found is that when I looked at lemmy.world content through lemmy.ca for example, it would not be up to date. Comments would be missing, upvotes and downvotes wouldn’t match, etc.
Yeah I found this issue as well. Although it actually seems to have improved in the last few weeks. Not sure what changed.
Pretty much every instance was having federation issues with lemmy.world due to their server just being overloaded. It’s significantly improved but I’m not sure if it’s 100% perfect yet.
I've found that since 0.18.2, Lemmy.ml has been as reliable as ever for me. Compared to June, I've had no hanging, errors, or issues accessing pages.