Does anyone run their own Lemmy instance on a pi? How was the process of setting it up? Were there any pitfalls? How is performance?
[Edit] So, a lot of testing later: compiling from scratch, etc., etc…
So far I have tried:
- installing Lemmy using rootless Docker (on 0.17.3)
- compiling the 0.18 Docker image for ARM
Rootless Docker did not work well for me: lots of systemd issues, and I gave up after running into too many problems. I had tried rootless Docker for security reasons (minimal permissions, etc.).
When trying to compile the latest Lemmy image for ARM, I ran into issues with muslrust not having an ARM version. It might be worth rewriting the 0.17.3 Dockerfile to work with 0.18.0, but I haven't investigated that fully yet! I tried compiling the latest image because I wanted to be able to use the latest features.
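For anyone else who hits the muslrust problem, here is the rough direction I would try (untested, and all image names and versions here are assumptions): swap the x86-only muslrust builder stage for the official multi-arch rust image, which means switching to a glibc runtime base as well.

```dockerfile
# Hypothetical builder stage (untested sketch): the official rust image
# is multi-arch, unlike muslrust. This produces a glibc binary, so the
# runtime stage needs a glibc base instead of alpine.
FROM rust:1.70-slim AS builder
WORKDIR /app
COPY . .
RUN cargo build --release

FROM debian:bullseye-slim
COPY --from=builder /app/target/release/lemmy_server /usr/local/bin/lemmy_server
CMD ["lemmy_server"]
```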
At the moment, I'm trying to get Lemmy running on bare metal. I'm currently attempting to compile Lemmy for ARM. If that works, I'll start setting up .service files to start Lemmy and pictrs.
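For the record, a minimal sketch of what I mean by a .service file (the user, paths, and config location are all assumptions; adjust to your own layout):

```ini
# Hypothetical /etc/systemd/system/lemmy.service; paths are illustrative.
[Unit]
Description=Lemmy server
After=network.target postgresql.service
Requires=postgresql.service

[Service]
User=lemmy
WorkingDirectory=/opt/lemmy
Environment=LEMMY_CONFIG_LOCATION=/opt/lemmy/config/config.hjson
ExecStart=/opt/lemmy/lemmy_server
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

A similar unit would be needed for pictrs, pointing at its binary and data directory.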
I don't think there should be any problems. Lemmy is a fairly lightweight web application; it's compiled, so there's no big overhead from a runtime like Ruby in the case of Mastodon. I haven't tried it on a Raspberry Pi, but on my server the load is always around 0.1.
The only bottleneck I could think of is Postgres, but I've run Postgres on Raspberry Pis without any problems before, too.
I'm looking at setting up a Lemmy instance on an RPi 3 with a cloudflared tunnel! I'm curious whether anyone else has done this and how it went.
Edit: I’ll give it a whirl and hopefully post an update from my new instance later!
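For anyone curious, the cloudflared config I have in mind looks roughly like this (the hostname and tunnel ID are placeholders; 8536 is Lemmy's default HTTP port as far as I know):

```yaml
# Sketch of ~/.cloudflared/config.yml; tunnel ID and hostname are placeholders.
tunnel: <tunnel-id>
credentials-file: /home/pi/.cloudflared/<tunnel-id>.json
ingress:
  - hostname: lemmy.example.com
    service: http://localhost:8536
  # catch-all rule required by cloudflared
  - service: http_status:404
```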
Please don’t forget to give us updates on your adventure^^
I was able to get it set up. The main things to watch out for:
- Don't use the provided Docker Compose file as-is. More precisely: don't build from source, and look up the correct image tag on Docker Hub first.
- The documentation was a bit confusing. This isn't really specific to the Pi, but since I was creating a compose file from scratch, some of the listed steps didn't quite explain all of the details.
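To illustrate what I mean, a minimal compose file along these lines worked for me; treat the tags and passwords as placeholders and look up the current ARM-compatible tag on Docker Hub before pinning anything:

```yaml
# Sketch of a minimal docker-compose.yml; 0.17.3 is a placeholder tag,
# check Docker Hub for the correct ARM-compatible one first.
version: "3.7"
services:
  lemmy:
    image: dessalines/lemmy:0.17.3
    restart: always
    ports:
      - "8536:8536"
    volumes:
      - ./lemmy.hjson:/config/config.hjson
    depends_on:
      - postgres
  postgres:
    image: postgres:15-alpine
    restart: always
    environment:
      POSTGRES_USER: lemmy
      POSTGRES_PASSWORD: changeme
      POSTGRES_DB: lemmy
    volumes:
      - ./volumes/postgres:/var/lib/postgresql/data
```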
I only used it for testing purposes, but performance was fine (on a Pi 4, 4 GB). Note I only ever had one user.
As I only want to use it for myself as a jump-off point (and to mess around a tad), I'm fine with the performance on an RPi 4 (I have the 8 GB version), but I'm struggling to get it running alongside everything else in my Debian install on it.
The local install fails because I need ImageMagick 7 (Debian still has 6.9), and it refuses to compile with the imei method (that script wants to use /usr/local/bin/identify, which I think it needs to install itself as part of ImageMagick). And I couldn't get the compose file to work with an external (already hosted) Postgres.
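What I tried for the external Postgres looked roughly like this (the host and credentials are placeholders; I believe LEMMY_DATABASE_URL is the environment variable Lemmy reads for the connection string, but double-check the docs):

```yaml
# Sketch: point Lemmy at an already-hosted Postgres instead of a
# containerized one. Host, port, and credentials are placeholders.
services:
  lemmy:
    image: dessalines/lemmy:0.17.3
    environment:
      - LEMMY_DATABASE_URL=postgres://lemmy:password@192.168.1.10:5432/lemmy
    ports:
      - "8536:8536"
```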
Any tips? I’m totally new with docker and ansible.
Removed by mod
You could plug in a USB SSD or HDD and make sure the DB and other regularly written data go there. That would pretty much remove the problem.
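A rough sketch of what that could look like in the compose file, assuming the SSD is mounted at /mnt/ssd (the paths are illustrative, not the project's official layout):

```yaml
# Bind the write-heavy data directories to the SSD instead of the SD card.
# Assumes the SSD is mounted at /mnt/ssd.
services:
  postgres:
    volumes:
      - /mnt/ssd/postgres:/var/lib/postgresql/data
  pictrs:
    volumes:
      - /mnt/ssd/pictrs:/mnt
```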
I do wonder how well it would perform. The limited memory and CPU power would surely make database access sluggish under even moderate load.
Removed by mod
Hey OP, I'm on a similar journey (except I'm using an RPi Kubernetes cluster).
I don’t have advice but I do want to wish you good luck
Here: my daily “simply a nice stranger” award goes to you
What user cap would a pi have running an instance?
Are you asking what I plan to set the cap to? I guess just me. I can't see anyone else wanting to run off a Pi in my house, and there are so many other instances to join.
I’m a newbie here but what would be the benefit of running an instance just for yourself?
The ability to host your own data - both for privacy, and insurance that the instance you host your account in won’t suddenly disappear.
I would also add that Lemmy is part of the fediverse, meaning it is federated. Federation means all instances "talk" to all instances (unless they defederate), so you aren't limited to the content on one instance, or in some cases even to Lemmy itself. Case in point: I'm posting this from my kbin.social account.
same!
What happens to posts/comments and any media/content hosted on a server that just goes away (for example, if I created one virtually and then deleted it, or if the SD card on a Pi got corrupted)?
If you uploaded an image to that server, the image will be gone. Your comments will still exist on other federated instances, assuming the instance was federated in the first place. But any new replies in those communities will not propagate once the hosting instance is offline.
For example, assume you have three instances: A, B, and C. You have an account on A and create a post in a community on A. At some point A goes away, but those posts and that community will still exist on B and C. So you create a new account on B and reply to one of those posts… users on C won't be able to see those replies, as A isn't there to broadcast them. And if someone on C creates a new post in that community from A, you wouldn't be able to see it on B either.
P.S. The same is true if A just decides to defederate instead of shutting down (except that the images and accounts would still exist, obviously).
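The propagation rule in that A/B/C example can be sketched as a toy model (this is not real ActivityPub, just the gist: replies reach other instances only if the community's home instance is still online to broadcast them):

```python
# Toy model of reply propagation in the A/B/C example above.
# Not real ActivityPub; just the broadcast-via-home-instance rule.

def deliver_reply(author_instance, home_online, federated_instances):
    """Return the set of instances that end up seeing the reply."""
    seen = {author_instance}          # the author's own instance always sees it
    if home_online:                   # home instance re-broadcasts to everyone
        seen.update(federated_instances)
    return seen

# A hosted the community but has shut down; B and C are federated.
print(sorted(deliver_reply("B", home_online=False, federated_instances={"B", "C"})))
# -> ['B']: C never sees the reply

print(sorted(deliver_reply("B", home_online=True, federated_instances={"B", "C"})))
# -> ['B', 'C']: with A online, the reply reaches everyone
```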
No, I meant: what is the user limit, based on the Raspberry Pi's tech specs?
Basically, the limit would be the speed of the database and the drive it runs on. If you connect a SATA SSD via USB 3, it shouldn't be too bad. I can't give you exact figures, but a few hundred users is probably OK if you don't expect the site to be super responsive.
Thanks. It might be useful for there to be a table outlining different hardware configs and acceptable user loads as more people consider creating instances.
It's difficult because different users have different usage patterns.
For example, two users who never post and are never online at the same time take essentially no resources from each other; they are effectively "one" user. Meanwhile, one user who posts 10 GB of content a day and is constantly active would be equivalent to hundreds of "normal" users.
Yes, sure; I didn't want to complicate the question by adding that :)