Just thought I’d share this since it’s working for me at my home instance of federate.cc, even though it’s not documented in the Lemmy hosting guide.
The image server used by Lemmy, pict-rs, recently added support for S3-compatible object storage (like Amazon S3) instead of serving images directly off the local disk. This is potentially interesting to you because object storage is orders of magnitude cheaper than adding disk space to a VM.
By way of example, I’m hosting my setup on Vultr, but this applies to, say, DigitalOcean or AWS as well. Going from a 50GB to a 100GB VM instance on Vultr will take you from $12 to $24/month. Up to 180GB, $48/month. Of course those tiers include CPU and RAM step-ups too, but I’m focusing only on disk space for now.
Vultr’s object storage by comparison is $5/month for 1TB of storage and includes a separate 1TB of bandwidth that doesn’t count against your main VM, plus this content is served off of Vultr’s CDN instead of your instance, meaning even less CPU load for you.
This is pretty easy to do. We’ll diverge slightly from the official Lemmy ansible setup to add some extra environment variables for pict-rs.
After step 5, before running the ansible playbook, we’re going to modify the ansible template slightly:
cd templates/
cp docker-compose.yml docker-compose.yml.original
Now we’re going to edit docker-compose.yml with your favourite text editor. Personally I like micro, but vim, emacs, nano or whatever will do…
favourite-editor docker-compose.yml
Down around line 67 begins the pictrs section. You’ll notice under the environment section there are a bunch of things that the Lemmy guys predefined. We’re going to add some here to take advantage of the new support for object storage in pict-rs 0.4+.
At the bottom of the environment section we’ll add these new vars:
- PICTRS__STORE__TYPE=object_storage
- PICTRS__STORE__ENDPOINT=Your Object Store Endpoint
- PICTRS__STORE__BUCKET_NAME=Your Bucket Name
- PICTRS__STORE__REGION=Your Bucket Region
- PICTRS__STORE__USE_PATH_STYLE=false
- PICTRS__STORE__ACCESS_KEY=Your Access Key
- PICTRS__STORE__SECRET_KEY=Your Secret Key
So your whole pictrs section looks something like this: https://pastebin.com/X1dP1jew
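In case that pastebin ever disappears, here’s a rough sketch of the shape the pictrs service block ends up with. The image tag and the volume line below are approximations from memory rather than the exact contents of the Lemmy template, so keep whatever your template already has and just append the new store vars to its environment section:

  pictrs:
    image: asonix/pictrs:0.4
    restart: always
    environment:
      # ...the variables the Lemmy template already defines stay here...
      - PICTRS__STORE__TYPE=object_storage
      - PICTRS__STORE__ENDPOINT=Your Object Store Endpoint
      - PICTRS__STORE__BUCKET_NAME=Your Bucket Name
      - PICTRS__STORE__REGION=Your Bucket Region
      - PICTRS__STORE__USE_PATH_STYLE=false
      - PICTRS__STORE__ACCESS_KEY=Your Access Key
      - PICTRS__STORE__SECRET_KEY=Your Secret Key
    volumes:
      - ./volumes/pictrs:/mnt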
The actual bucket name, region, access key and secret key will come from your provider. If you’re using Vultr like me, you’ll find them after creating your object store, under Overview -> S3 Credentials. On Vultr your endpoint will be something like sjc1.vultrobjects.com, and your region is the domain prefix, in this case sjc1.
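Filled in for a bucket in that sjc1 location, the two location-related vars would look roughly like this (I’m assuming the endpoint wants the full https:// URL; if your pict-rs version expects the bare hostname instead, drop the prefix):

- PICTRS__STORE__ENDPOINT=https://sjc1.vultrobjects.com
- PICTRS__STORE__REGION=sjc1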
Now you can install as usual. If you have an existing instance already deployed, there is an additional migration command you have to run to move your on-disk images into the object storage.
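For reference, the migration goes through pict-rs’s migrate-store subcommand, run once to copy the old on-disk files into the new object store. A minimal sketch of what the invocation might look like when run inside the pictrs container follows; the flag names are from the pict-rs 0.4 README as I remember them, and /mnt is where the Lemmy compose file mounts the pictrs volume, so double-check both (and whether the server should be stopped first) against the pict-rs docs for your version before running:

# run from the directory containing docker-compose.yml
docker-compose exec pictrs pict-rs \
  migrate-store \
  filesystem -p /mnt \
  object-storage \
    -e Your Object Store Endpoint \
    -b Your Bucket Name \
    -r Your Bucket Region \
    -a Your Access Key \
    -s Your Secret Key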
You’re now good to go and things should pretty much behave like before, except pict-rs will be saving images to your designated cloud/object store, and when serving images it will instead redirect clients to pull directly from the object store, saving you a lot of storage, CPU use and bandwidth, and therefore money.
Hope this helps someone. I’m not an expert in either Lemmy administration or Linux sysadmin stuff, but I can say I’ve done this on my own instance at federate.cc and so far I can’t see any ill effects.
Happy Lemmy-ing!
Commenting to bookmark this for future reference! Great write up. I’ll likely try this on my instance at some point.
Also testing to see if my comment shows up…
Comment seen!
Huzzah!
(copying my answer also in here from your other post)
Going from a 50GB to a 100GB VM instance on Vultr will take you from $12 to $24/month. Up to 180GB, $48/month
This is true if you’re only upgrading your VPS to the next tier, but that upgrades not just the disk, it also bumps vCPU, RAM and bandwidth. So you may be paying for upgrades you don’t need.
I still have my 1vCPU, 1GB RAM, 25GB disk for $6 plus 40GB of block storage for just $1 extra.
If I want to go up to 100GB of block storage it’d be $2.50, and 181GB would be $4.53. For $48/month you can get 1.9TB. I’m still not very familiar with object storage, so for something like pict-rs it could still be better/cheaper to use it; I just wanted to mention that block storage can also be an option.
Block storage is another option, but I believe it then becomes your problem to mount it, etc. You can think of object storage as a “cloud drive” with its own web-serving capabilities: you upload your files “somewhere”, they live in a cloud you don’t have to manage, and end users’ browsers can pull files directly from it. It does tend to be cheaper as well. So there are some advantages for our use case.
Ahh, got it.
Is object storage always public? Or can it be restricted to just some network?
I mean for other use cases, for example storing personal files like photos or notes that I want to share with selected people.
You can make the buckets require authentication, and build a backend that checks permissions and generates a signed URL that allows access to the specific item in the private bucket.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/example_s3_Scenario_PresignedUrl_section.html
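As a quick concrete illustration of the presigned-URL idea, using the AWS CLI and a made-up private bucket and object key, you can mint a time-limited download link like this (S3-compatible providers generally accept the same call with --endpoint-url pointed at them):

# generate a URL that grants read access to one object for an hour
aws s3 presign s3://my-private-bucket/photos/holiday.jpg --expires-in 3600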
It’s not always public, but it’s designed for use in a cloud application environment, so it’s not literally Dropbox. Still, the metaphor is useful for understanding it: it’s a remote store that isn’t accessible from the local file system / device explorer, and it lives on a server whose specifics you don’t know or care about.
I did this Wednesday for my instance. Migrating the existing content took some downtime, but now everything is running great. Thanks for the tip!
No problem!
Saved! That’s good for me to keep in mind, as it helps with distributing services. I’m intending to run my instance on Kubernetes with Helm, and I think this means I can have pict-rs working with MinIO. Thanks!
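For what it’s worth, the same PICTRS__STORE__* variables should be all pict-rs needs to talk to MinIO too. A hedged sketch, assuming a MinIO service reachable in-cluster at http://minio:9000 with a bucket named pictrs (the service name, bucket and credentials here are made up, and MinIO generally wants path-style addressing):

- PICTRS__STORE__TYPE=object_storage
- PICTRS__STORE__ENDPOINT=http://minio:9000
- PICTRS__STORE__BUCKET_NAME=pictrs
- PICTRS__STORE__REGION=us-east-1
- PICTRS__STORE__USE_PATH_STYLE=true
- PICTRS__STORE__ACCESS_KEY=minio-access-key
- PICTRS__STORE__SECRET_KEY=minio-secret-key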
I’ve got 50 or so users now and my pict-rs volume is only 4GB so far. I’ll save this tip for later perhaps but it doesn’t seem to be a pressing issue for most small instances, at least in terms of storage space.
But how old is your instance? (and when did the number of users or active users reach a stable-ish level?)
If it’s only 2 weeks old, then that means the need for storage is growing by more than 8GB/month. After a year, you’d need over 100GB of storage.
Awesome. Thanks for sharing 🙏
Thank you for this! I’m going to give it a try.
I have been thinking about moving my backend storage to s3 as well. Saving this for later. Thanks!
You can also use better server providers that aren’t absolutely overpriced.