I recognize this will vary depending on how much you self-host, so I’m curious about the range of experiences, from people hosting just a few things to people hosting many.
Also, how would you compare it to the maintenance of your other systems (e.g. personal computer, phone, etc.)?
Acronyms, initialisms, abbreviations, contractions, and other phrases that expand to something larger, seen in this thread:
- AP: WiFi Access Point
- DHCP: Dynamic Host Configuration Protocol, automates assignment of IPs when connecting to a network
- DNS: Domain Name Service/System
- Git: Popular version control system, primarily for code
- IP: Internet Protocol
- LTS: Long Term Support software version
- LXC: Linux Containers
- NAS: Network-Attached Storage
- RAID: Redundant Array of Independent Disks for mass storage
- RPi: Raspberry Pi brand of SBC
- SBC: Single-Board Computer
- SSD: Solid State Drive mass storage
- SSH: Secure Shell for remote terminal access
- VPN: Virtual Private Network
- VPS: Virtual Private Server (opposed to shared hosting)
I have just been round my small setup and run an OS update; it took about an hour, including a reboot of a dedicated server at OVH.
A Pi and a mini PC at home, plus a dedi at OVH running 2 LXC containers and 5 QEMU VMs. All Debian, a mix of 11 and 12.
I spend Wednesday evenings checking what updates need installing: I get a weekly email from newreleases.io with software updates, and I run Semaphore to check on OS updates.
Huge amounts of daily maintenance because I lack self control and keep changing things that were previously working.
Highly recommend doing infrastructure-as-code. It makes it really easy to git commit and save a previously working state, so you can backtrack when something goes wrong.
Ansible is great for this!
Got any decent guides on how to do it? I guess a docker compose file can do most of the work there; I’m not sure about volume backups and other dependencies in the OS.
Sorry I replied to the parent comment, but check out Ansible
Oh, I think I tried at one point, and when the guide started talking about inventories, playbooks and hosts in the first step it broke me a little xd
I get it. The inventory is just a list of all the servers and PCs you’re trying to manage, and the playbooks contain every step you would take if you were configuring everything manually.
I’ll be honest, when you first set it up it’s daunting, but that’s the thing! You only need to do it once; then you can deploy and redeploy anything you have in minutes.
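To make the inventory/playbook split concrete, here’s a minimal sketch, with invented hostnames, of the kind of weekly apt-update playbook mentioned elsewhere in this thread (this is illustrative, not anyone’s actual setup):

```yaml
# inventory.ini (hypothetical hosts):
#   [homelab]
#   pi.lan
#   minipc.lan

# update.yml: one playbook that updates every host in the inventory
- name: Weekly OS updates
  hosts: all
  become: true
  tasks:
    - name: Update apt cache and upgrade all packages
      ansible.builtin.apt:
        update_cache: true
        upgrade: dist
```

You’d run it with `ansible-playbook -i inventory.ini update.yml`, and the same command works unchanged whether the inventory lists two hosts or two hundred.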
I have weekly backups of my VMs in Proxmox. Fuck it lol.
Nightly backups to a repurposed QNAP running PBS. I’m fully aware it’s overkill, but it gives me some peace of mind.
I opted for weekly so I could store longer time periods. If I want to go a month back, I only need 4 backups instead of 30. At least that was the main idea. I’ve definitely realized I fucked something up weeks ago without noticing before lol.
I’ve got PBS set up to keep 7 daily backups and 4 weekly backups. I used to have it retaining multiple monthly backups, but I realized I never need those, and since I sync my backups volume to B2 it was costing me $$.
What I need to do is shop around for a storage VM in the cloud that I could install PBS on. Then I could have more granular control over what’s synced instead of the current all-or-nothing approach. I just don’t think I’m going to find something that comes in at B2 pricing and reliability.
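For reference, that retention policy maps directly onto PBS prune options; a hedged sketch, where the repository and backup group names are placeholders:

```sh
# Placeholder repository and group; keeps a week of dailies plus 4 weeklies
proxmox-backup-client prune vm/100 \
  --repository 'backup@pbs@pbs.lan:store1' \
  --keep-daily 7 --keep-weekly 4
```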
Very minimal. Mostly just run updates every now and then and fix what breaks which is relatively rare. The Docker stacks in particular are quite painless.
A couple of websites, Lemmy, Matrix, a whole email stack, DNS, an IRC bouncer, Nextcloud, WireGuard, Jitsi, a Minecraft server, and I believe that’s about it?
I’m a DevOps engineer at work, managing 2k+ VMs that I can more than keep up with. I’d say it varies more with experience and how it’s set up than with how much you manage. When you use Ansible and Terraform and Kubernetes, the count of servers and services isn’t really important. One, five, ten, a thousand servers: it matters very little, since you just run Ansible on them and 5 minutes later it’s all up and running. I don’t use that for my own servers out of laziness, but still, I set most of that stuff up 10 years ago and it’s still happily humming along just fine.
Same same: just one update a week on Friday, between two yawns, for the 4 VMs and 10-15 services I have, plus a quarterly backup. Doesn’t involve much, apart from the odd ad-hoc re-linking of the reverse proxy when containers switch IPs on the docker network after the VM restarts/resets.
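In case it’s useful, one common way to stop that IP shuffle is to pin addresses on a user-defined compose network (or simply have the proxy reference containers by service name via Docker’s built-in DNS); a rough sketch, with an invented service name and subnet:

```yaml
# docker-compose.yml fragment (service name and subnet are made up)
services:
  someapp:
    image: someapp:latest
    networks:
      proxynet:
        ipv4_address: 172.20.0.10   # fixed IP the reverse proxy can rely on

networks:
  proxynet:
    ipam:
      config:
        - subnet: 172.20.0.0/24
```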
+1 for docker and minimal maintenance. Only updates or new containers might break stuff; if you don’t touch it, it will be fine. Of course, there might be some container-specific problems. Depends what you want to run. And I’m not a DevOps engineer like Max 😅
Typically, very little. I have ~40 containers in my Docker stack and by and large it just works. I upgrade stuff here and there as needed. I am getting ready to do a hardware refresh, but again, with Docker that’s pretty painless.
Most of the time spent in my lab is trying out new things. I’ll find a new something that looks cool and go down the rabbit hole with it for a while. Then back to the status quo.
After my Nextcloud server just killed itself from an update and I ditched that junk software, nearly zero maintenance.
I have
- auto-updates on
- daily borgbackups to a Hetzner storage box
- auto snapshots of the servers at Hetzner
- cloud-init scripts ready for any of the servers
- XPipe for management
- KeePass as a backup for all the SSH keys and passwords
And I have never used any of those … it just runs and keeps running.
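For flavor, a daily Borg run against a Hetzner storage box is usually just a couple of commands in a cron job or systemd timer; a hedged sketch where the account, port and source paths are placeholders:

```sh
# Placeholder storage box account and source paths
export BORG_REPO='ssh://u123456@u123456.your-storagebox.de:23/./borg'
borg create --stats --compression zstd ::'{hostname}-{now}' /etc /home /srv
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6
```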
I am self-hosting
- a website
- a booking service for me
- caldav server
- forgejo
- opengist
- jitsi
I need to set up some file-sharing thing (Nextcloud replacement), but I am not sure what. My use case is mainly 1) archiving junk, 2) syncing files between three devices, 3) streaming my music collection.
I moved from Nextcloud to Seafile. The file sync is so much better than Nextcloud and ownCloud.
It has a normal Windows client and also a mount-type client (SeaDrive), which is also amazing for large libraries.
I have mine set up with OAuth via Authentik and it works super well.
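For anyone wanting the same combination: Seafile’s OAuth login is configured in seahub_settings.py. Below is a hedged sketch pointed at Authentik’s usual endpoints; the domain and client ID/secret are placeholders, and the exact attribute map can vary between Seafile versions, so treat this as a starting point rather than a verified config:

```python
# seahub_settings.py fragment (all values are placeholders)
ENABLE_OAUTH = True
OAUTH_CLIENT_ID = "seafile"
OAUTH_CLIENT_SECRET = "change-me"
OAUTH_REDIRECT_URL = "https://seafile.example.com/oauth/callback/"
OAUTH_PROVIDER_DOMAIN = "auth.example.com"
OAUTH_AUTHORIZATION_URL = "https://auth.example.com/application/o/authorize/"
OAUTH_TOKEN_URL = "https://auth.example.com/application/o/token/"
OAUTH_USER_INFO_URL = "https://auth.example.com/application/o/userinfo/"
OAUTH_SCOPE = ["openid", "profile", "email"]
OAUTH_ATTRIBUTE_MAP = {
    "email": (True, "email"),  # (required, Seafile attribute name)
    "name": (False, "name"),
}
```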
I actually moved from Seafile to Nextcloud, because when I had two PCs running simultaneously it would constantly have sync errors that required manual resolving. Sadly, Nextcloud wasn’t really better. So I am now looking for solutions that can avoid file conflicts between two simultaneous clients.
Are you changing the same files at the same time?
I have multiple computers syncing into the same library all the time without issue.
> Are you changing the same files at the same time?
Rarely. But there is some offline laptop use compounded with slow sync times. (I was running it on a Raspberry Pi with an external USB HDD enclosure.)
Either way, I’d like something less fragile. I’ll test seafile again sometime, thanks.
sometimes I remember I’m self hosting things
+1. Automate your backup rotation, set up your monitoring and alerting, and then ignore everything until something actually goes wrong. I touch my lab a handful of times a year when it’s time for major updates; otherwise it basically runs itself.
As long as you remember before you turn off the computer!
my main PC hosts nothing, everything else is always on
I don’t understand. “Turn… off?”
neofetch proudly displaying 5 months of uptime
Since scrapping systemd, a hell of a lot less, but it can occasionally be a bit of messing about when my dynamic IP gets reassigned.
I run two local physical servers, one production and one dev (and a third, prod2, kept in case of a prod1 failure), plus two remote production/backup servers, all running Proxmox, and two VPSs. Most apps are dockerised inside LXC containers (on Proxmox) or just run on Docker on Ubuntu (VPSs). Each of the three locations runs a Synology NAS in addition to the server.
Backups run automatically, and I manually run apt updates on everything each weekend with a single Ansible playbook. Every host runs a little golang program that exposes the memory and disk use percentages as a JSON endpoint, and I use two instances of Uptime Kuma (one local, and one on fly.io) to monitor all of those with keyword checks.
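As a sketch of what such a helper can look like (a guess at the general shape, not the commenter’s actual program; assumes Linux and uses only the standard library):

```go
package main

import (
	"bufio"
	"encoding/json"
	"log"
	"net/http"
	"os"
	"strconv"
	"strings"
	"syscall"
)

// stats is the JSON shape Uptime Kuma can keyword-match against.
type stats struct {
	MemUsedPercent  float64 `json:"mem_used_percent"`
	DiskUsedPercent float64 `json:"disk_used_percent"`
	Status          string  `json:"status"`
}

// memUsedPercent parses /proc/meminfo and returns used memory as a percentage.
func memUsedPercent() float64 {
	f, err := os.Open("/proc/meminfo")
	if err != nil {
		return -1
	}
	defer f.Close()
	vals := map[string]float64{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 {
			v, _ := strconv.ParseFloat(fields[1], 64)
			vals[strings.TrimSuffix(fields[0], ":")] = v
		}
	}
	total, avail := vals["MemTotal"], vals["MemAvailable"]
	if total == 0 {
		return -1
	}
	return (total - avail) / total * 100
}

// diskUsedPercent stats the filesystem at path and returns used space as a percentage.
func diskUsedPercent(path string) float64 {
	var fs syscall.Statfs_t
	if err := syscall.Statfs(path, &fs); err != nil {
		return -1
	}
	total := float64(fs.Blocks) * float64(fs.Bsize)
	free := float64(fs.Bavail) * float64(fs.Bsize)
	if total == 0 {
		return -1
	}
	return (total - free) / total * 100
}

func main() {
	http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(stats{memUsedPercent(), diskUsedPercent("/"), "ok"})
	})
	log.Fatal(http.ListenAndServe(":9100", nil))
}
```

Uptime Kuma’s keyword monitor can then watch `http://host:9100/health` for `"status":"ok"` (or for the percentage fields) and alert when it disappears.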
So -
- weekly: 10 minutes to run the update playbook, and I usually SSH into the VPSs, have a look at the Fail2Ban stats and reboot them if needed. I also look at each of the Proxmox GUIs to check the backups have been working as expected.
- Monthly: stop the local prod machine and switch to the prod2 machine (from backups) for a few days. Probably 30 minutes each way, most of it waiting for backups.
- From time to time (if I hear of a security update), but generally every three months: look through my container versions and see if I want to update them. They’re on docker compose, so the steps are just: back up the LXC, docker down, pull, up (see the sketch after this list); probs 5 minutes per container.
- Yearly: consider whether I need to upgrade operating systems, e.g. to Proxmox 8, or a new Debian or Ubuntu LTS.
- Yearly: visit the remotes and have a proper check/clean up/updates
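Spelled out, the per-container update step above looks roughly like this; the VMID, storage name and path are placeholders:

```sh
# On the Proxmox host: snapshot-mode backup of the LXC first (placeholder VMID/storage)
vzdump 105 --mode snapshot --storage local

# Then inside the container, from the app's compose directory (placeholder path):
cd /opt/someapp
docker compose down
docker compose pull
docker compose up -d
```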
Not much for myself, like many others, but my backups are manual. I have an external drive I back up to and unplug, as I intentionally want to keep it completely isolated from the network in case of a breach. Because of that, maybe 10 minutes a week? I’m running Gentoo with tons of scripts and docker containers that update automatically. The only time I need to intervene in the updates is when my script sends me a push notification about an eselect news item (like a major upcoming update) or a kernel update.
I also use custom monitoring software I wrote, tied into a MySQL DB that Grafana connects to, for general software and network alerts (new devices connecting to the network, suspicious DNS requests, suspicious ports, suspicious countries being reached out to like China, etc.) or hardware failures (like a RAID drive failing). So yeah: automate if you know how to script or program, and you’ll be pretty much worry-free most of the time.
A lot less since I started using docker instead of running separate VMs for everything. Fewer systems to update is bliss.
As others said, the initial setup may consume some time, but once it’s running, it just works. I dockerize almost everything and have automatic backups set up.
As a complete noob trying to make a TrueNAS server: none, and then suddenly lots when I don’t know how to fix something that broke.