So, I am thinking about getting myself a NAS, mainly to host Immich and Plex. Got a couple of questions for the experienced folk:

  • Is Synology the best/easiest way to start? If not, what are the closest alternatives?
  • What OS should I go for? OMV, Synology’s OS, or Unraid?
  • Mainly gonna host Plex/Jellyfin and Synology Photos/Immich - haven’t quite decided which solutions to go for.

Appreciate any tips :sparkles:

  • @PuppyOSAndCoffee@lemmy.ml · 2 years ago

    A NAS serves data to clients. I know this is tilting conventional wisdom on its head, but hear me out: go for the most inexpensive, lowest-power, storage-only NAS that you can tolerate, and instead put your money into your data transport (network) and into your clients.

    As much as possible, simplify your life: move processing out of the middle tiers and into the client tiers.

  • @Haphazard9479@lemm.ee · 2 years ago

    I have a QNAP and have had no issues. It runs its own QTS OS, so there’s no need to figure out what you want to run. Make sure the hardware is x86; Plex runs better on x86.

  • Corgana · 2 years ago (edited)

    I’ve found CasaOS to be the simplest to set up and get going. I tried TrueNAS for a year, but wish I had started with CasaOS.

      • Corgana · 2 years ago

        Haven’t tried OMV, but the lesson I learned with TrueNAS is that software designed primarily for NAS has a lot of features I don’t care about, and the other apps can be finicky. I’m not storing petabytes of data. CasaOS was the closest I found to “just works”.

        There’s also Umbrel OS which looks promising, but I’ve been happy with CasaOS so haven’t felt the need to switch.

  • @talentedkiwi@sh.itjust.works · 2 years ago

    I run Proxmox on bare metal, with an HBA card passed through to TrueNAS Scale. I’ve had good luck with this setup.

    The HBA card is passed through to TrueNAS so it gets direct control of the drives for ZFS. I got mine on eBay.

    I’m running Proxmox so that I can separate some of my processes (e.g. a Plex LXC) into different containers and VMs.

    • @InformalTrifle@lemmy.world · 2 years ago

      I’d love to find out more about this setup. Do you know of any blogs/wikis explaining that? Are you separating the storage from the compute with the HBA card?

      • Yote.zip · 2 years ago

        This is a fairly common setup and it’s not too complex - learning more about Proxmox and TrueNAS/ZFS individually will probably be easiest.

        Usually:

        • Proxmox on bare metal

        • TrueNAS Core/Scale in a VM

        • Pass the HBA PCI card through to TrueNAS and set up your ZFS pool there

        • If you run your app stack through Docker, set up a minimal Debian/Alpine host VM (you can technically use Docker under an LXC but experienced people keep saying it causes problems eventually and I’ll take their word for it)

        • If you run your app stack through LXCs, just set them up through Proxmox normally

        • Set up an NFS share through TrueNAS, and connect your app stack to that NFS share

        • (Optional): Just run your ZFS pool on Proxmox itself and skip TrueNAS
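
        The passthrough and share steps above can be sketched as commands; the VM ID, PCI address, share IP, and dataset names here are placeholders, not from the thread:

```shell
# On the Proxmox host: locate the HBA, then attach its whole PCI device
# to the TrueNAS VM (VM ID 100 is an example).
lspci -nn | grep -i 'sas\|raid'
qm set 100 --hostpci0 0000:01:00.0

# Inside the Debian/Alpine app VM: mount the NFS share TrueNAS exports
# (server IP and dataset path are hypothetical).
apt install -y nfs-common
echo '192.168.1.50:/mnt/tank/apps  /mnt/apps  nfs  defaults,_netdev  0 0' >> /etc/fstab
mkdir -p /mnt/apps && mount /mnt/apps
```

        These are host-specific admin commands, so treat them as a map of the steps rather than something to paste verbatim.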

        • @talentedkiwi@sh.itjust.works · 2 years ago

          This is 100% my experience and setup. (Though I run Debian for my docker VM)

          I did run Docker in an LXC but ran into some weird permission issues that shouldn’t have existed. I ran it again in a VM and had no issues with the same setup, so I decided to keep it that way.

          I do run my Plex and Jellyfin in LXCs though. No issues with that so far.

        • @InformalTrifle@lemmy.world · 2 years ago

          I already run proxmox but not TrueNAS. I’m really just confused about the HBA card. Probably a stupid question but why can’t TrueNAS access regular drives connected to SATA?

          • Yote.zip · 2 years ago

            The main problem is just getting TrueNAS access to the physical disks via IOMMU groups and passthrough. HBA cards are a super easy way to get a dedicated IOMMU group that has all your drives attached, so it’s common for people to use them in these sorts of setups. If you can pull your normal SATA controller down into the TrueNAS VM without messing anything else up on the host layer, it will work the same way as an HBA card for all TrueNAS cares.

            (TMK, SATA controller hubs are usually an all-at-once passthrough, so if you have your host system running off some part of this controller it probably won’t work to unhook it from the host and give it to the guest.)
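
            As a concrete check, the usual way to see how your controllers are grouped before attempting any passthrough is a small loop over sysfs on the host (this assumes IOMMU is enabled in the BIOS and kernel):

```shell
#!/bin/sh
# Print each IOMMU group and the PCI devices inside it. A SATA controller
# or HBA that sits alone in its group is a good passthrough candidate;
# one that shares a group with host-critical devices is not.
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo "  $(lspci -nns "${d##*/}")"
    done
done
```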

        • rentar42 · 2 years ago

          So theoretically, if someone has already set up their NAS (custom Debian with ZFS root instead of TrueNAS, but that shouldn’t matter), it sounds like it should be relatively straightforward to migrate all of that into a Proxmox VM by installing Proxmox “under it”, right? The only thing I’d need right now is an SSD for Proxmox itself.

          • Yote.zip · 2 years ago (edited)

            Proxmox would be the host on bare metal, with your current install as a VM under that. I’m not sure how to migrate an existing real install into a VM so it might require backing up configs and reinstalling.

            You shouldn’t need any extra hardware in theory, as Proxmox will let you split up the space on a drive to give to guest VMs.

            (I’m probably misunderstanding what you’re trying to do?)

            • rentar42 · 2 years ago

              I just thought that if all storage can easily be “passed through” to a VM then it should in theory be very simple to boot the existing installation in a VM directly.

              Regarding the extra storage: sharing disk space between Proxmox and my current installation would imply that I have to pass through “half of a drive”, which I don’t think works like that. Also, I’m using ZFS for my OS disk and I don’t feel comfortable trying to figure out if I can resize those partitions without breaking anything ;-)

              • Yote.zip · 2 years ago

                That should work, but I don’t have experience with it. In that case yeah you’d need another separate drive to store Proxmox on.

    • jevans ⁂ · 2 years ago

      This is a great way to set this up. I’m moving over to this in a few days. I have a temporary setup with ZFS directly on Proxmox and an OMV VM for handling shares, because my B450 motherboard’s IOMMU groups won’t let me pass through my GPU and an HBA to separate VMs (note for OP: if you cannot pass through your HBA to a VM, this setup is not a good idea). I ordered an ASRock X570 Phantom Gaming motherboard as a replacement ($110 on Amazon right now; it’s a great deal) that will have more separate IOMMU groups.

      My old setup was similar but used ESXi instead of Proxmox. I also went nuts and virtualized pfSense on the same PC. It was surprisingly stable, but I’m keeping my gateway on a separate PC from now on.

      • Yote.zip · 2 years ago

        If you can’t pass through your HBA to a VM, feel free to manage ZFS through Proxmox instead (CLI or with something like Cockpit). While TrueNAS is a nice GUI for ZFS, if it’s getting in the way you really don’t need it.

        • jevans ⁂ · 2 years ago

          TrueNAS has nice defaults for managing snapshots and the like that make it a bit safer, but yeah, as I said, I run ZFS directly on Proxmox right now.

          • Yote.zip · 2 years ago

            Oh sorry, for some reason I read “OMV VM” and assumed the ZFS pool was set up there. The Cockpit ZFS Manager extension that I linked has good management of snapshots as well, which may be sufficient depending on how much power you need.

    • @Scrath@feddit.de · 2 years ago

      Can definitely confirm this. I started with a Proxmox system running a TrueNAS VM, though TrueNAS just used a USB HDD for storage. Setting everything up and getting the permissions right so I could connect my computers was a pain in the ass.

      Later I bought a Synology and it just works. The only thing I would recommend is getting good HDDs. I bought Toshiba MG08 16TB drives, and while they work great, they are obnoxiously loud during read and write operations. They are so loud that even though the NAS is in a separate room, I have to shut it off at night.

      Meanwhile the Seagate Ironwolf drive I used for TrueNAS was next to my bed for multiple months and was basically silent.

  • Dark Arc · 2 years ago

    TrueNAS Scale is a pretty easy to use option (based on Debian) backed by the excellent ZFS file system.

      • Dark Arc · 2 years ago (edited)

        Eh… the TrueNAS UI basically takes care of any ZFS learning curve. The main thing I’d note is that RAID 5 & 6 style pools can’t currently be expanded incrementally. So you either need to use mirroring, configure the system upfront to be as big as you expect you’ll need for years to come, or use smaller RAID 5 sets of disks (e.g. create two RAID 5 volumes with 3 disks each instead of one RAID 5 volume with 6 disks).

        Not sure what you’re referring to as an easy backup option that zfs excludes, but maybe I’m just ignorant 🙂
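
        For what it’s worth, the two layouts described above look roughly like this at the zpool level (pool and disk names are made up for illustration):

```shell
# One 6-disk RAID 5-style vdev (RAIDZ1): most usable space, but the vdev
# itself can't be grown one disk at a time.
zpool create tank raidz1 sda sdb sdc sdd sde sdf

# Two 3-disk RAIDZ1 vdevs in the same pool: costs an extra parity disk,
# but the pool can later be expanded by adding another 3-disk vdev.
zpool create tank raidz1 sda sdb sdc raidz1 sdd sde sdf
```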

      • rentar42 · 2 years ago

        I agree with the learning curve (personally I found it worthwhile, but that’s subjective).

        But how does ZFS limit easy backup options? IMO it only adds options (like zfs send/receive) but any backup solution that works with any other file systems should work just as well with ZFS (potentially better since you can use snapshots to make sure any backup is internally consistent).
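
        For example, either style works (dataset, repo, and host names here are placeholders): a snapshot can feed zfs send/receive to another ZFS machine, or any ordinary file-based tool can back up the snapshot’s frozen, read-only view:

```shell
# Replicate incrementally to a second ZFS box over SSH...
zfs snapshot tank/data@2023-09-24
zfs send -i tank/data@2023-09-23 tank/data@2023-09-24 | ssh backuphost zfs receive backup/data

# ...or point a conventional backup tool (restic, as one example) at the
# snapshot directory to get an internally consistent backup without
# needing ZFS on the other end.
restic -r /mnt/usb/restic-repo backup /tank/data/.zfs/snapshot/2023-09-24
```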

        • @cyberpunk007@lemmy.world · 2 years ago

          Because you can’t use typical backup software products. If you do it the right way, you’re using ZFS send and receive to another machine running ZFS, which significantly adds to cost.

          • rentar42 · 2 years ago

            That’s an extremely silly reason not to use a specific tool: Tool A provides an alternative way to do X, but I want to do X with some other tool B (that’ll also work with tool A), so I won’t be using tool A.

            Send/receive may or may not be the right answer for backing up, even on ZFS, depending on what exactly you want to achieve. It’s really nice when it is what you want, but it’s no panacea (and certainly no reason to avoid ZFS, since its use is 100% optional).

            • @cyberpunk007@lemmy.world · 2 years ago

              I really don’t get why you call my reason silly. You can’t use Acronis, Veeam, or other typical backup products with ZFS. My point is that this is a barrier to entry. I don’t think it’s reasonable to expect a home user to build another expensive NAS just to do ZFS send and receive, which would be the proper way.

              I don’t consider backups optional.

  • @Decronym@lemmy.decronym.xyz [bot] · 2 years ago (edited)

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters   More Letters
    ESXi            VMware virtual machine hypervisor
    LXC             Linux Containers
    NAS             Network-Attached Storage
    Plex            Brand of media server package
    RAID            Redundant Array of Independent Disks for mass storage
    SATA            Serial AT Attachment interface for mass storage
    SSD             Solid State Drive mass storage
    SSH             Secure Shell for remote terminal access
    k8s             Kubernetes container management package

    [Thread #164 for this sub, first seen 24th Sep 2023, 20:25] [FAQ] [Full list] [Contact] [Source code]

    • @Molecular0079@lemmy.world · 2 years ago

      I got burned pretty badly by QNAP. Their TS-453 Pro had an Intel manufacturing defect that caused it to die prematurely, and QNAP has basically disclaimed all responsibility for it. I built my own NAS after that experience.

      • sharpiemarker · 2 years ago

        Yep, I had the same issue with my TS-453, but mine was second hand. I ended up buying a new QNAP NAS and have been very happy with it.

  • @ebits21@lemmy.ca · 2 years ago

    My Synology NAS was super easy to set up and has been very solid. Very happy with it. I’m sure there are other solutions though.

    • @thirdBreakfast@lemmy.world · 2 years ago

      This was the route I went with when I started, and I’ve never had cause to regret it. For people near the start of their self-hosting journey, it’s the no-hassle, reliable choice.

  • @dartanjinn@lemm.ee · 2 years ago

    A ZimaBoard 832 with two 2TB SSDs and OMV is my setup. Pair it with Tailscale for availability wherever you go.

    I wasn’t a fan of Immich, although I’m trying to replace Google Photos, so my opinion is a bit skewed.

  • @thoughtorgan@lemmy.world · 2 years ago

    Unraid is great. Don’t let the FOSS heads say otherwise.

    I paid $100 3 years ago, ONCE. Best purchase I’ve ever made.

    I’ve tried the foss alternatives after getting familiar with unraid, and I still prefer unraid.

    • shastaxc · 2 years ago

      Seconded. But for more details… it’s great because you can throw in many different drives of different sizes, unlike RAID servers where every drive has to be the same size. You can also specify however much you want to use as parity (backup) drives.

      It has a nice web interface that you can access from any other PC on your LAN. I also have mine set up with Unraid Connect which allows me to access it from the open web also. It has a strong password and 2FA so I’m not concerned about security.

      It also makes it easy to serve Docker containers and full-blown VMs. You can set them up right in the UI, or you can SSH in and use it as a normal Linux OS if you’re a power user. The web UI also has a button that launches an SSH terminal in a separate window.

      You can just use it as a NAS if you want, but Unraid makes it easy to expand your capabilities if you later feel like it. For example, you are only a few button clicks away from running Jellyfin to provide a nice UI for all your media files that you may be storing on your NAS.
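
      To illustrate that last point, a minimal Jellyfin container is only a few lines whether entered in Unraid’s UI or as a compose file; the share paths below follow common Unraid conventions and are assumptions, not from the thread:

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin
    ports:
      - "8096:8096"                       # web UI
    volumes:
      - /mnt/user/appdata/jellyfin:/config
      - /mnt/user/media:/media:ro         # media share, read-only
    restart: unless-stopped
```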

  • @hassanmckusick@lemmy.discothe.quest · 2 years ago

    I’m a big fan of Unraid, but I will admit it’s overkill for a simple media server.

    A Synology NAS should be plenty powerful for most streaming needs, so long as you’re willing to let your media transcode first and you’re not streaming to too many devices at once.

    I use my Unraid NAS to run Sonarr/Radarr/Readarr/Prowlarr, Stable Diffusion, MyJDownloader, a few VMs, and at one point even my Lemmy instance. But honestly, aside from Stable Diffusion and the VMs, a Synology NAS should have enough power to run a handful of other apps in addition to Plex/Jellyfin.

  • @jws_shadotak@sh.itjust.works · 2 years ago

    Synology is generally a great option if you can afford the premium.

    Unraid is a good alternative for the poor man. Check this list of cases to build in. I personally have a Fractal R5, which supports up to 13 HDDs.

    Unraid is generally a better bang for your buck imo. It’s got great support from the community.

  • Synapse · 2 years ago

    If you want a “set up and forget” type of experience, Synology will serve you well, if you can afford it. If you are more of a tinkerer and see yourself experimenting and upgrading in the future, then I recommend a custom build. OMV is a solid OS for a novice, but any Linux distro you fancy can do the job very well!

    I started my NAS journey with a very humble 1-bay Synology. For the last few years I have been using a custom-built ARM NAS (NanoPi M4V2) with 4 bays, running Armbian. All my services run on Docker; I have Jellyfin, *arr, Bitwarden and several other services running very reliably.

    • @entropicdrift@lemmy.sdf.org · 2 years ago (edited)

      ^ This. I have an M1 Mac mini running Asahi Linux with a bunch of Docker containers, and it works great. I run Jellyfin off a separate stick PC with an Intel Celeron running Ubuntu MATE. Basically, I just keep docker compose files on those two machines and occasionally SSH in from my phone to run sudo apt update && sudo apt upgrade -y (on Ubuntu) or sudo pacman -Syu (on Asahi), and then docker compose pull && docker compose up -d.
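
      Spelled out, that maintenance routine amounts to something like the following (hostnames and the compose directory are placeholders):

```shell
# Update OS packages, then refresh and restart the container stack,
# one machine at a time over SSH.
ssh mac-mini 'sudo pacman -Syu --noconfirm && cd ~/stack && docker compose pull && docker compose up -d'
ssh stick-pc 'sudo apt update && sudo apt upgrade -y && cd ~/stack && docker compose pull && docker compose up -d'
```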

    • @redballooon@lemm.ee · 2 years ago

      And if you’re not sure how much tinkering you want to do, a Synology with Docker support is a good option.

  • Banthex · 2 years ago

    I use Xpenology on my old gaming rig as a server (no GPU), and I love it. I had Unraid before, which was also very good, but different. My main usage is storing family files and photos, and the best software for that, for me, is Synology Photos.