• @lambda_notation@lemmy.ml

    JFS + LVM on personal machines, Ceph on storage. I used to use ZFS on Solaris and FreeBSD, but I haven't touched it on Linux (no particular reason, and I'd rather not run afoul of the licensing).

  • @vext01@lemmy.sdf.org

    ZFS on my FreeBSD file server, for the error checking, error correction and flexibility.

    FFS everywhere else, because I’m an OpenBSD guy. I don’t love FFS, but it works.

  • surfrock66

    For most systems, ext4 because it seems stable and uncomplicated.

    For my NAS and big data, ZFS. People whose opinions I trust recommend it, and to the best of my technical ability to evaluate said things, the claims make sense and seem to be extremely beneficial against the threats I perceive to my data.

  • @haroldstork@lemm.ee

    I love 'em all, especially btrfs. But I have to stay away from XFS; I had so many weird issues with it that made no sense.

  • @blackstrat@lemmy.fwgx.uk

    I have enough to think about without the damn file system getting complicated. So plain old ext4. It's stable, it works, it's great.

    I used btrfs once and it went really badly. When it gets corrupted it refuses to even let you mount read-only. The documentation isn't good, and you end up finding obscure wikis with big warnings to only run these commands if you know what you're doing. Of course I don't; there's nowhere to learn it, and the only people who do know are the developers who wrote the filesystem. No thanks! It holds your data captive, so you'd better have some spare time and some backups. Never again.

  • terribleplan

    Ext4 because it is rock solid and a reasonable foundation for Gluster. Moving off of ZFS to scale beyond what a single server can handle. I would still run ZFS for single-server many-drive situations, though MDADM is actually pretty decent honestly.

  • @lod@angry.expert

    Ext4. My needs are simple, and in all the years I've been using extX I've never had a problem.

    • Outcide

      Same. I tried btrfs and ended up with a corrupted drive. I’ve never had ext4 fail on me in a way that wasn’t recoverable. Boring and safe are features I like in my filesystems.

  • sophs

    Btrfs, because of compression. And I’ve never had any issues with it.

  • @Hopfgeist@feddit.de

    ZFS raidz1 or raidz2 on NetBSD for mass storage on rotating disks, journaled FFS on RAID1 on SSD for system disks, as NetBSD cannot really boot from zfs (yet).

    ZFS because it has superior safeguards against corruption, and flexible partitioning; FFS because it is what works.

      • @Hopfgeist@feddit.de

        What are the advantages of raid10 over ZFS raidz2? It needs more raw disk per unit of usable space as soon as you have more than four disks, it doesn't have ZFS's automatic checksum-based error correction, and it is, in general, less resilient against multiple disk failures. In the worst case two lost disks can mean the loss of the whole array, whereas raidz2 can tolerate the loss of any two disks. Plus, with plain RAID you still need a separate volume manager and filesystem on top.
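
        A rough back-of-the-envelope illustration of the space point, assuming equal 4 TB disks and ignoring metadata, padding and raidz allocation overhead (just arithmetic, not a sizing guide):

        ```python
        # Usable capacity: RAID10 (striped mirrors) vs ZFS raidz2.
        def usable_raid10(disks, size_tb):
            return (disks // 2) * size_tb   # half the disks hold mirror copies

        def usable_raidz2(disks, size_tb):
            return (disks - 2) * size_tb    # two disks' worth goes to parity

        for n in (4, 6, 8, 12):
            print(f"{n} x 4 TB: raid10 = {usable_raid10(n, 4)} TB, "
                  f"raidz2 = {usable_raidz2(n, 4)} TB")
        # 4 disks: 8 vs 8 TB; 6: 12 vs 16; 8: 16 vs 24; 12: 24 vs 40
        ```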

        • Sifr Moja

          @Hopfgeist Speed on large spinning disks. Faster rebuilds. Less chance of complete failure because of a URE (unrecoverable read error).

  • SeriousBug

    ext4 on an mdadm raid. It works well enough, and supports growing your array.
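
    For reference, a minimal sketch of the grow path, assuming a RAID5-style array at /dev/md0 and a new disk at /dev/sde (both placeholder names), wrapped in Python purely for illustration:

    ```python
    import subprocess

    def run(cmd):
        # Echo and run a command, failing loudly on error.
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Add the new disk as a spare, then reshape the array onto it.
    run(["mdadm", "--add", "/dev/md0", "/dev/sde"])
    run(["mdadm", "--grow", "/dev/md0", "--raid-devices=5"])

    # Once the reshape has finished, grow ext4 to fill the bigger array.
    run(["resize2fs", "/dev/md0"])
    ```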

    Although if I rebuilt this from scratch, I would skip mdadm and just let MinIO control all the drives. MinIO has an S3-compatible API, which I'd then mount into whatever apps need it.
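
    For the MinIO route, a minimal sketch of what an app would do instead of touching a file path, assuming boto3 and a made-up local endpoint, credentials and bucket:

    ```python
    import boto3

    # Point a standard S3 client at the local MinIO server
    # (endpoint, credentials and bucket names here are placeholders).
    s3 = boto3.client(
        "s3",
        endpoint_url="http://minio.local:9000",
        aws_access_key_id="minioadmin",
        aws_secret_access_key="minioadmin",
    )

    # Store and fetch blobs by key instead of reading/writing file paths.
    with open("backup.tar.gz", "rb") as f:
        s3.put_object(Bucket="apps", Key="backups/2023-07/backup.tar.gz", Body=f)

    obj = s3.get_object(Bucket="apps", Key="backups/2023-07/backup.tar.gz")
    data = obj["Body"].read()
    ```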

    • stephenc

      Love mdadm, it’s simple and straightforward.

    • @mattes@lemmy.kussi.me

      Love MinIO but it’s not a filesystem and mounting object storage as a filesystem is not a great experience (speaking from commercial experience).

      • @aksdb@feddit.de

        Same experience here. S3 is essentially a key/value store for putting and retrieving large values/blobs. Everything resembling a filesystem feature is just convention over how keys are named. Communication happens over HTTP, so there is a lot of overhead when working with it as an FS.

        On the web you can use these properties to your advantage: you can talk to S3 with simple HTTP clients, you can use reverse proxies, and you can put a CDN in front and have a static file server.

        But FS utilities are almost always optimized for instant block-based access and fast metadata responses. Something as simple as a find will fuck you over on S3.
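
        To make the "convention over key names" point concrete, a hedged sketch with boto3 and a made-up bucket: there are no real directories, only a prefix/delimiter convention, and anything find-like has to page through keys over HTTP:

        ```python
        import boto3

        s3 = boto3.client("s3")  # or point endpoint_url at any S3-compatible store

        # "Directories" are just key prefixes; Delimiter="/" asks S3 to group them.
        resp = s3.list_objects_v2(Bucket="my-bucket", Prefix="photos/2023/", Delimiter="/")
        for obj in resp.get("Contents", []):       # "files" directly under the prefix
            print(obj["Key"], obj["Size"])
        for cp in resp.get("CommonPrefixes", []):  # "subdirectories"
            print(cp["Prefix"])

        # A find-style walk has to page through every key under the prefix,
        # one HTTP round trip per 1000 keys: that is where the slow metadata bites.
        paginator = s3.get_paginator("list_objects_v2")
        for page in paginator.paginate(Bucket="my-bucket", Prefix="photos/"):
            for obj in page.get("Contents", []):
                print(obj["Key"])
        ```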

  • 1337

    ZFS on the file server, fully LUKS-encrypted btrfs on the desktop, and probably ext4 or whatever the default is on the 'buntus for the laptop and work desktop.

    ZFS on FreeNAS/TrueNAS has been rock solid over 10+ years of RAID. I love working with btrfs snapshots and the ease of adding drives on demand to expand. I don't think much about ext4 on those systems.

  • @rubii@lm.inu.is

    Just ext4 pooled together with mergerfs for my media files. Seems to fit my use perfectly.

  • @vagrantprodigy@lemmy.whynotdrs.org

    XFS for the moment, but transitioning towards ZFS. I'd never touch btrfs again; it simply is not as resilient or recoverable as a quality file system should be.