If you’re not paying for a service, you’re likely being monetized by watching ads or providing personal data to companies that don’t necessarily have your best interests at heart.
This is a bit out of date. Nowadays, you pay for the service and are monetized by watching ads and providing personal data to companies that definitely don’t have your best interests at heart.
People said it back then too. The ad and tracking industry will always invade more and more of our privacy. When will there be enough tracking to make them stop and be happy? Never. Never is the only answer.
Username checks out.
I recently decided to get more serious about self hosting and gotta say,
use TrueNAS Scale, just do it, literally everything is 1 click… While it can be complicated, it is most definitely worth it, not just to stick it to big tech, but because some of the self-hosted apps genuinely provide a better experience than centralized alternatives. Nextcloud surprised me especially with how genuinely nice it is. Installed it, got an SSL certificate, and replaced Google services almost entirely in a few hours of work.
I’ve still got a few things I wanna do which look very complicated… Stuff like a mail server and pfSense (the stuff of nightmares) are among the 1st on my list…
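(For what it’s worth, the SSL part really is quick. If you front it with nginx yourself instead of letting TrueNAS handle certificates, and you have the certbot nginx plugin installed, it’s roughly one command; the domain here is just a placeholder:)

# request a Let's Encrypt certificate and wire it into the nginx vhost
sudo certbot --nginx -d cloud.example.com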
OPNsense is generally pretty easy, more powerful, and more open than pfSense. I started with pfSense but went to OPNsense and have loved it!
I’ve tried both and both were hell
Genuine advice: I recommend you get into the nitty gritty of Linux someday.
GUIs, especially complex GUIs, are just hell on earth. Actually sitting down and learning about what you’re doing, and familiarizing yourself with the underlying tools, is an incredibly good way to get around that problem.
It’s really hard to fuck up a CLI, and with a certain level of knowledge it’s really easy to navigate more complex topics and concepts. It’s very worthwhile.
I am very much into the nitty gritty of Linux (I use Alpine, FYI). The problem is, pfSense/OPNsense aren’t based on Linux, they’re FreeBSD-based…
And I also don’t really know how to set them up… You know, as routers, mainly because my internet comes through PPPoE and I just cannot for the life of me figure out how to pass that through to a VM. I bound the VM to its own NIC, did everything, and it did not work…
Honestly, I found it really easy. I don’t have a background in IT or anything either.
What did you find difficult? Setting custom firewall rules is harder to understand, but the general functionality of setting up a NAT and even installing and configuring ZenArmor were super super easy.
removed by mod
I already have my own network with stuff and things… it’s mostly just the simple stuff (TrueNAS Scale, Pi-hole, WireGuard, Nextcloud and other things like that). But yeah, outside my Mac, I have literally zero experience with BSD…
removed by mod
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
DHCP: Dynamic Host Configuration Protocol, automates assignment of IPs when connecting to a network
DNS: Domain Name Service/System
Git: Popular version control system, primarily for code
HTTP: Hypertext Transfer Protocol, the Web
IP: Internet Protocol
NAS: Network-Attached Storage
NAT: Network Address Translation
NUC: Next Unit of Computing, brand of Intel small computers
NVMe: Non-Volatile Memory Express, interface for mass storage
Plex: Brand of media server package
RPi: Raspberry Pi brand of SBC
SBC: Single-Board Computer
SMB: Server Message Block protocol for file and printer sharing; Windows-native
SMTP: Simple Mail Transfer Protocol
SSL: Secure Sockets Layer, for transparent encryption
VPS: Virtual Private Server (opposed to shared hosting)
XMPP: Extensible Messaging and Presence Protocol (‘Jabber’) for open instant messaging
nginx: Popular HTTP server
17 acronyms in this thread; the most compressed thread commented on today has 3 acronyms.
[Thread #773 for this sub, first seen 30th May 2024, 10:15]
I do self-host some services, but it bugs me that a lot of articles that talk about costs leave out a lot of additional ones. Drives for a NAS need replacement. Running NUCs means quite an energy draw compared to most ARM-based SBCs.
And it dismisses the time component of self hosting. It’s not going to be zero.
It really bugs me in general how often the term “home lab” is conflated with “home server”, and in the context of what this article is trying to communicate, it’s only going to turn off the more casually technical people it’s trying to appeal to.
For many people, their home lab can also function as a server for self hosting things that aren’t meant to be permanent, but that’s not what a home lab is or is for. A home lab is a collection of hardware for experimenting and prototyping different processes and technologies. It’s not meant to be a permanent home for services and data. If the server in your house can’t be shut down and wiped at any given time without any disruption to or loss of data that’s important to you, then you don’t have a home lab.
Based on what I’ve seen, I’d also say a homelab is often needlessly complex compared to what I’d consider a sane approach to self-hosting. People throw in all sorts of complexity to imitate what they’re asked to do professionally: things that are either actually bad but have hype/marketing behind them, or that only bring value at scales far beyond a household’s hosting needs. Far simpler setups that are nearly zero-touch day to day will suffice.
Oh yeah, that’s part of it. If this article is supposed to be a call to action, somebody who starts looking into “homelabs” is going to get confused, they’ll get some sticker shock, and they won’t understand how any of it applies to what’s said in the article. They’ll see a mix of information from small home servers to hyperconverged infrastructure, banks of Cisco routers and switches, etc. My first home lab was a stack of old Cisco gear I used to study for my network engineering degree. If you stumbled upon an old post of mine talking about my setup and all you’re looking for is a Plex box, you’ll be like “What the fuck is all this shit, I’m not trying to deal with all that”.
“Self hosting”, and “home server” are just more accurate keywords to look into and actually see things more closely related to what you want.
Yep, and I see evidence of that overcomplication in some ‘getting started’ questions, where people ask about really convoluted design points and then others reinforce it by doubling down, or sometimes by mentioning other weird, exotic stuff, when the asker might be served by a checkbox in a ‘dumbed down’ self-hosting distribution on a single server, or by installing a package and just having it run, or, for some things, by running a single podman or docker command. But if they are struggling with complicated networking and scaling across a set of systems, then they are going way beyond what makes sense for a self-hosting scenario.
removed by mod
Only if nothing on it is permanent. You can have a home lab where the things you’re testing are self hosted apps. But if the server in question is meant to be permanent, like if you’re backing up the data on it, or you’ve got it on a UPS you make sure it stays available, or you would be upset if somebody came by and accidentally unplugged it during the day, it’s not a home lab.
A home lab is an unimportant, transient environment meant for tinkering, prototyping, and breaking.
A box that’s a solution to something, that’s hosting anything you can’t get rid of at a moment’s notice, is just a home server.
I still use the label ‘homelab’ for everything in my house, including the production services. It’s just a convenient term and not something I’ve seen anyone split hairs about until now.
Only if nothing on it is permanent. You can have a home lab where the things you’re testing are self hosted apps. But if the server in question is meant to be permanent, like if you’re backing up the data on it, or you’ve got it on a UPS you make sure it stays available, or you would be upset if somebody came by and accidentally unplugged it during the day, it’s not a home lab.
A home lab is an unimportant, transient environment meant for tinkering, prototyping, and breaking.
I don’t know if I really like that definition. Going by the definition of a laboratory, it doesn’t really make much sense. I mean, sure, they’re a sterile environment, but it’s incredibly unlikely that a lab is wiped clean and built from scratch, unless you get millions of dollars and a lot of free time, I guess.
A lab is merely a place to do work with regard to studying, learning, or improving something.
People often refer to their “homelab” as an entire server rack. You want me to believe that people are willing to wheel out their entire server rack and discard the entire fucking thing? I doubt it. A homelab is just a collection of gear (usually commercial networking gear) intended to provide an environment for you to mess around with things and learn about stuff.
In some capacity a homelab has to be semi-permanent, if only for actually testing the reliability and functionality of the services and hardware, because part of the lab is the service itself.
removed by mod
removed by mod
removed by mod
removed by mod
I do host some stuff myself 😉 but there’s one thing to keep in mind.
Don’t self-host stuff that your family still needs after you’re gone. Unless they are self-host nerds like you. I stopped self-hosting our mail and docs, for example.
Would you agree?
…Happy cake day?
I wasn’t aware it was on Lemmy too.
I agree, and I think there’s some reliability arguments for certain services, too.
I’ve been using self-hosted Bitwarden. That’s something I really want to be reliable anywhere I happen to be. I don’t want to rely on my home Internet connection always being up and dyn DNS always matching. An AWS instance or something like that which can handle Bitwarden would be around $20/month (it’s kinda heavy on RAM). Bitwarden’s own hosting is only $3.33/month for a family plan.
Yes, Bitwarden can work with its local cache only, but I don’t like not being able to sync everything. It’s potentially too important to leave to a residential-level Internet connection.
Is your home connection down that much? I’d think that even syncing once every day or so would populate everything fine, and if you’re at home it should update over wifi.
I might just be spoiled because I’m the only one using mine and only for a handful of devices.
Not really, I just have trust issues with my ISP, and I’m willing to spend three bucks a month to work around them.
I’d agree, but then you can expand this quite widely. Do you think they won’t need their pictures anymore, if you host something like Immich/PhotoPrism? If you host movies, series, or games, they may not strictly need them anymore, but it would still be noticeable that they are no longer accessible.
Not that I’m saying you’re wrong; I don’t know myself what a good way of handling that would be.
Ideally you want something that gracefully degrades.
So, my media library is hosted by Plex/Jellyfin and a bunch of complex firewall and reverse proxy stuff… And it’s replicated using Syncthing. But at the end of the day it’s on an external HDD that they can plug into a regular old laptop and browse on pretty much any OS.
Same story for old family photos (Photoprism, indexing a directory tree on a Synology NAS) and regular files (mostly just direct SMB mounts on the same NAS).
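(The SMB part is nothing fancy, for what it’s worth. With cifs-utils installed it boils down to roughly one fstab line per share; the host, paths and credentials file here are placeholders:)

# read-only mount of the NAS photo share, owned by the local user
//nas.local/photos  /mnt/photos  cifs  credentials=/etc/nas-creds,ro,uid=1000,gid=1000  0  0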
Backups are a bit more complex, but I also have fairly detailed disaster recovery plans that explain how to decrypt/restore backups and access admin functions, if I’m not available (in the grim scenario, dead - but also maybe just overseas or otherwise indisposed) when something bad happens.
Aside from that, I always make sure that all of the self-hosting stuff in my family home is entirely separate from the network infra. No DNS, DHCP or anything else ever runs on my hosting infra.
I’m tired of the argument that the solution to fighting tracking/ads/subscriptions/GAFAM is self-hosting.
It’s a solution for some nice people who have the knowledge, time and money for it.
But it’s not a solution for everyone.
We need more small, friendly open-source associations and companies that provide services for people who don’t know the difference between a web search engine and a browser, or between a server and a client. I think that initiatives like “les chatons” in France are amazing for that!!! ( https://www.chatons.org/en ) And just to be clear, I think that self-hosted services are a part of the solution. :)
I’m hoping my makerspace will be able to do something like that in the future. We’d need funding for a much bigger internet connection, at least three full time systems people paid market wages and benefits (three because they deserve to go on vacation while we maintain a reasonable level of reliability), and also space for a couple of server racks. Equipment itself is pretty cheap–tons of used servers on eBay are out there–but monthly costs are not.
It’s a lot, but I think we could pull it off a few years from now if we can find the right funding sources. Hopefully can be self-funding in the long run with reasonable monthly fees.
Agreed. Most people online think having a personal website on their own domain is too much of a hassle; they won’t have the knowledge or time to set up a homelab server.
We need more of the nice people you mention — with the tech knowhow and surplus of time — to maintain community services as alternatives to corporate platforms. I see a few co-op services around where member-owners pay a fee to have access to cloud storage and social platforms; that is one way to ensure the basic upkeep of such a community. I’m not sure how Chatons is financed but they certainly have a wide range of libre and private offerings!
IIRC, it’s nearly impossible to self-host email anymore, unless you have a long established domain already. Gmail will tend to mark you as spam if you’re sending from a new domain. Since they dominate email, you’re stuck with their rules. The only way to get on the good boy list is to host on Google Workspace or another established service like Protonmail.
That’s on top of the fact that correctly configuring an email server has always been a PITA. More so if you want to avoid being a spam gateway.
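(Concretely, a big chunk of “configuring it correctly” nowadays is DNS rather than the mail daemon itself. A rough sketch of the records a sending domain is expected to publish; example.org and the selector are placeholders, and the DKIM value comes from whatever signs your outbound mail:)

example.org.                  TXT  "v=spf1 mx -all"                                              ; SPF: only my MX may send
mail._domainkey.example.org.  TXT  "v=DKIM1; k=rsa; p=<public key>"                              ; DKIM public key
_dmarc.example.org.           TXT  "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.org"   ; DMARC policy and reports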
We need something better than email.
We need something better than email.
Say everyone agrees and the entire world swaps to some alternative. Email 3.0 or whatever.
Wouldn’t we just have the same issue? Any form of communication protocol that can be self-hosted will get abused by spam, requiring a lot of extra work to manage.
Setting up a web of trust could cut out almost all spam. Of course, getting most people to manage their trust in a network is difficult, to say the least. The only other solution has been walled gardens like Facebook or Discord, and I don’t have to tell anyone around here about the problems with those.
Isn’t the current email system kind of a web of trust? Microsoft, Google, etc. trust each other, but little me and my home server are not part of that web of trust, so my email server gets blocked.
Yeah, that’s kinda what my GP post was getting at. But it’s all managed by corporations, not individuals.
Realistically I don’t see how it would ever not be managed by a corporation. Your average person doesn’t know how to and doesn’t want to manage their own messaging system. They are just going to offload that responsibility to a corporation to do it for them. We are just going to have exactly the same system we have now, just called something else besides email.
I wish there was a better solution but I am not seeing a way that doesn’t just end up the same as email.
Well, there’s always, you know, mail.
Aah, the good ol‘ wooden variety
I self-host mine using Mailcow, but I use an outbound SMTP relay for sending email so I don’t have to deal with IP reputation.
On top of that, most ISPs block port 25 on residential IP addresses to combat spam, making it impossible to go full “DIY”.
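(The relay bit is genuinely the easy part, for what it’s worth. Mailcow exposes it in its UI, but in plain Postfix terms it boils down to a few lines in main.cf plus a postmap run over the credentials file; the relay host and port here are placeholders:)

# hand all outbound mail to an authenticated relay on the submission port, over TLS
relayhost = [smtp.relay.example]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = encrypt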
All of these types of articles always leave out what your time is worth to you and the maintenance costs of spare hard drives and other equipment. The TCO is not just the initial investment in hardware/software. Unless you plan to host something unreliably and value your time at nothing. In which case I hope you don’t get friends or family hooked on your stuff, or everyone will have a bad time and be back to Google Drive/Docs and Netflix within 5 years.
The reason they leave it out, I feel, is because once you factor all of that in, the $10/month you’re paying for Google Drive storage or the ~$25 you’re paying Netflix starts to make a lot more sense when paired with a decent local backup from a Synology NAS for the “I can’t lose this” stuff like baby pictures of your kids. Which blows their entire premise out of the water.
I self-host a lot, but mostly on cheap VPSes, in addition to the few services on local hardware.
However, these also don’t take into account the amount of time and money to maintain these networks and equipment. Residential electricity isn’t cheap; internet access isn’t cheap, especially if you have to get business-class Internet to get upload speeds over 10 or 15 Mbps, or to avoid TOS breaches for running what they consider commercial services even if it’s just for you, mostly because of cable company monopolies; cooling the hardware, especially if you live in a hotter climate, isn’t cheap; and then there’s maintaining the hardware and OS, upgrades, offsite backups for disaster recovery, and all of the other costs. For me, VPSes work, but for others maintaining the OS and software is too much time to put in. And just figuring out what software to host, and then how to set it up and properly secure it, takes a ton of time.
Residential electricity isn’t cheap
This is a point many folks don’t take into account. My average per-kWh cost right now is $0.41 (yes, California, yay). So it costs me almost $400 per year just to have some older hardware running 24x7.
Omg, I pay €30 for 1 Gbit/s down / 0.7 Gbit/s up (ten more for symmetrical 10 Gbit/s; I don’t need it and can’t even use more than 1 Gbit/s, but my inner nerd wants it) and €0.15/kWh.
BTW the electricity cost is somewhat or totally negated when you heat your apartment/house depending on your heating system. For me in the winter I totally write it off.
I solved this by installing solar panels. They produce more electricity than I need (enough to cover charging an EV when I get one in the future), and I should break even (in terms of cost) within 5-6 years of installation. Had them installed last year under NEM 2.0.
I know PG&E want to introduce a fixed monthly fee at some point, which throws off my break-even calculations a bit.
Some VPS providers have good deals and you can often find systems with 16GB RAM and NVMe drives for around $70-100/year during LowEndTalk Black Friday sales, so it’s definitely worth considering if your use cases can be better handled by a VPS. I have both - a home server for things like photos, music, and security camera footage, and VPSes for things that need to be reliable and up 100% of the time (websites, email, etc)
This sounds excessive; that’s almost $1.10/day, amounting to more than 2 kWh per 24 hrs, i.e. a continuous draw of roughly 80 W. You will need to invest in a TDP friendly build. I’m running an AMD APU (known for shitty idle consumption) with RAID 5 and still hover below 40 W.
This isn’t speculation on my part; I measured the consumption with a Kill-a-watt. It’s an 11-year-old PC with 4 hard drives and multiple fans because it’s in a hot environment, and hard drive usage is significant because it’s running security camera software in a virtual machine. The host OS is Linux Mint. It averages right around 110 W. I’m fully aware that’s very high relative to something purpose-built.
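(Sanity check on the numbers: 110 W × 24 h × 365 is roughly 960 kWh a year, and at $0.41/kWh that comes to about $395, so the “almost $400” figure above is right on the money.)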
You will need to invest in a TDP friendly build
Right, and spend even more money.
I think the main culprit is the CPU/motherboard, so that’s the only thing that needs a replacement. There are many cheap alternatives (less than $200) that could halve the consumption and would easily pay for themselves within a year of usage. There is a Google Doc floating around listing all the efficient CPUs and their TDPs. Just a suggestion; I’m pretty sure it would pay off its price after a year. There is absolutely no need for a constant 110 W unless you’re running LLMs on it, and even then it shouldn’t be that high.
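(The payback math roughly checks out, too: going from ~110 W to ~55 W saves about 55 W × 24 h × 365 ≈ 480 kWh a year, which at $0.41/kWh is close to $200, right around the cost of the swap.)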
deleted by creator
I think it’s so people here can give themselves a pat on the back for self-hosting lol.
Like how the Linux Lemmy community has so many “Windows is bad, Linux is good” posts. Practically everyone in there already knows that Linux is good.
Welcome to the internet, where people try their best to find people with the same opinions so they can feel good and get pissed when they can’t.
I self-host mail (OpenSMTPD for SMTP, Dovecot for IMAP), ZNC (IRC bouncer), SSH, VPN (IPsec/IKEv2), www/HTTP (httpd), git (git-daemon), and gotweb, all very easily on an extremely cheap ($2 a month, 512 MB RAM, 10 GB storage) OpenBSD VPS. With all these servers I’m using an immense 178 MB of my 512 MB of available memory.
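(If anyone’s curious, there’s not much ceremony to it on OpenBSD either: smtpd and httpd ship in base, the rest comes from packages, and rcctl wires it all up. Package names here are from memory, and the actual config files are of course where the real work is:)

# install the non-base daemons, then enable and start everything at boot
pkg_add dovecot znc
rcctl enable smtpd dovecot znc httpd
rcctl start smtpd dovecot znc httpd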
what VPS provider are you using?
buyvm/frantech
I have similar specs and cost with ionos
Someday I hope we have a server technology that’s platform-agnostic and you can just add things like “Minecraft Server” or “Email Server” to a list and it’ll install, configure, and host everything in the list with a sensible default config. I imagine you could make the technology fairly easily, although keeping up with new services, versions, security updates, etc. would be quite the hassle. But that’s what collaboration is for!
As someone who has had a career in hosting: good luck.
Don’t forget backups, logging, monitoring, alerting on top of security updates, hardware failure, power outages, OS updates, app updates, and tech being deprecated and obsolete at a rapid pace.
I’m in favor of a decentralized net with more self-hosting, but that requires more education and skill. You can’t automate away all the unpleasant and technical bits.
But if we hide the complexity, surely we won’t ever have to deal with it! /s
You can’t automate away all the unpleasant and technical bits.
But it’s our job to try
Honestly at this point that is docker and docker compose.
As to what to run it on, that very much depends on preference. I use a Proxmox server, but it could just as easily be pure Debian. A basic web UI like Cockpit can make system management operations a bit more simplified.
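(To make that concrete, the “add things to a list” idea from above is more or less literally what a compose file is. A minimal sketch, where the images, ports and paths are just examples:)

services:
  jellyfin:
    image: jellyfin/jellyfin        # media server, web UI on 8096
    ports:
      - "8096:8096"
    volumes:
      - ./media:/media
    restart: unless-stopped
  syncthing:
    image: syncthing/syncthing      # file sync, web UI on 8384
    ports:
      - "8384:8384"
      - "22000:22000"
    volumes:
      - ./syncthing:/var/syncthing
    restart: unless-stopped

One docker compose up -d brings the whole list up, and adding a service later is just another block in the file.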
Docker is in theory nice, if it works. Docker doesn’t run on my computer (I have no fucking clue why). Every time I try to do anything I get the error “Unknown Server: OS”, and there is literally nothing you can find online about how to fix this problem.
What computer and OS do you have that can’t run docker? You can run a full stack of services on a random windows laptop as easily as a dedicated server.
Edit: autocorrect messing with OS.
I use EndeavourOS, but had the same problem on Arch.
Hardware-wise I have a Ryzen 7 5800X, an RX 6700 XT and 32 GB of 3200 MHz RAM.
The weird thing is, that some time ago I was actually able to use docker, but now I’m not.
That doesn’t make any sense to me. It can be installed directly from pacman. It may be something silly like adding your user to the docker group. Have you done something like the below for Docker?
- Update the package index:
sudo pacman -Syu
- Install Docker:
sudo pacman -S docker
- Enable and start the Docker service:
sudo systemctl enable docker.service
sudo systemctl start docker.service
- Add your user to the docker group to run Docker commands without sudo:
sudo usermod -aG docker $USER
- Log out and log back in for the group changes to take effect.
- Verify that Docker is installed correctly by running:
docker --version

If you get the above working, docker compose is just:
sudo pacman -S docker-compose
I didn’t start docker and didn’t add it to my user group. Maybe this will fix it.
sudo pacman -S docker-compose
I did all the steps you mentioned and now it works (at least if I use sudo to run the commands).
I thought it would. If it still requires sudo to run, it’s probably just Docker wanting your user account added to the docker group. If the “docker” group doesn’t exist, you can safely create it.
You will likely need to log out and log back in for the system to recognize the new group permissions.
Cloudron does that. Not for free, though, but it’s cheap.
Sounds kinda like NixOS, although that’s not platform-agnostic.
Funnily enough I do use NixOS for my server! It’s not quite what I was describing but it does allow me to host easily.
deleted by creator
Neat!
…is as mod by Vaskii
Unraid does this via docker. It’s amazing. You can do this live and on the fly.
I get that. And I self-host the things I care about. But for the average layman? I don’t see self-hosting as a real option. Unless someone is decently tech savvy and has an aptitude for troubleshooting, they aren’t gonna put in the time or effort of the initial setup, even if maintenance is minimal once it’s running. That first leap into self-hosted is daunting.
I think of it this way… would I expect my dad to be able to do it? Absolutely not. And my dad is decently tech savvy for 70.
The first step is normalising the idea of privacy so people can even see the point of paying for something they can easily get for free.
The next step would be to make products people can easily use without being tech savvy. A Synology NAS has been great for me and I praise the setup to anyone who will listen, but even with something like Synology, people will need some basic knowledge.
YunoHost is trying to make it easier than a Synology NAS to install services and get them set up properly, but I agree that configuring your network properly is difficult, and everyone’s setup is different, so specific knowledge is required.
Yeah, YunoHost is pretty great for fewer than 10 users, perhaps more depending on the service. It’s very easy to get set up in a weekend with a plethora of services, and it’s pretty stable.
You are correct! That first leap into self hosting was a doozy! No regrets now tho ¯_(ツ)_/¯
I think you dropped this: \
That first leap into self-hosted is daunting.
The first leap you take into anything is daunting.
This is just called complacency. You can literally just pick up whatever the fuck you want, and start learning it.
Don’t forget that self-hosting without proper knowledge is more dangerous than just giving away data to big tech!
Trying to run your own nextcloud be like
Nextcloud was somewhat difficult for me the first time I installed it, though I did have a usable system in the end. Then I discovered Nextcloud AIO and haven’t had an issue since.
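(For anyone curious, AIO really is close to a single docker run of the master container, which then manages all the other containers for you. Roughly the command from their README, trimmed; check the current README for the full set of flags:)

sudo docker run \
  --init \
  --name nextcloud-aio-mastercontainer \
  --restart always \
  --publish 8080:8080 \
  --volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config \
  --volume /var/run/docker.sock:/var/run/docker.sock:ro \
  nextcloud/all-in-one:latest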
I don’t get this counter-argument. Is TFA actually suggesting that the average grandma quit using Yahoo mail or Facebook and set up her own email server and mastodon instance? The only people even considering self-hosting are people with technology interest and reasonable passion. It’s an article written for a niche techie website, and we’re discussing it on a forum for self-hosting nerds.
The counter-argument is like saying the average layman should stick to televised football, because they don’t have the physical savvy or aptitude for the game, and most people aren’t gonna put in the time or effort to build their strength & endurance to compete. It may be an accurate statement, but the people you’re addressing (grandma) weren’t TFA’s target audience and weren’t even going to try in the first place, and you discourage people who might really enjoy giving the hobby a try.
And here’s the reason why the layman should not: they’re much more likely to make that one wrong move and suffer irrecoverable data loss than some faceless corporation selling their data.
At the end of the day, those of us who are technical enough will take the risk and learn, but for the vast majority of people, it is and will continue to remain a non-starter for the foreseeable future.
Not to mention, few people have the time, skill, money, and energy to do it. They’re happy to outsource in exchange for money and/or data.
There are actually easy solutions out there. For example CasaOS: it’s a one-liner and you get a Docker orchestration with an app store and built-in file and SMB management. I bet even non-technical folks could use this.
The “layman” should fall back to old ways. Think local photo management with maybe some backup software
So just because they don’t know technology like you do, they should be left behind the times instead of taking advantage of advancements? A bit elitist and gatekeeping there, don’t you think?
Everyone has their own choices to make, and for most, they’ve already decided they’d rather benefit from advancements than care about what you care about.
I think they should do what they know. Asking them to try to learn new things when they don’t enjoy it is not fun
With that being said, if they have the drive to spend time on it let them
And here’s the reason why the layman should not: they’re much more likely to make that one wrong move and suffer irrecoverable data loss than some faceless corporation selling their data.
and yet Americans still drive cars.
I don’t disagree, but you just have to be aware that you can fuck shit up. And if you do, that’s not my problem, or anybody else’s, at the end of the day.