Re: [slightly OT] XFS vs. BTRFS vs. others as root/usr/var/tmp filesystems?

Yeah, sadly it looks like btrfs will never materialize as the filesystem of the future.  Red Hat, for example, has already dropped it from its roadmap, and others probably will (or already have) as well.


On Sun, Sep 23, 2018 at 11:28 AM mj <lists@xxxxxxxxxxxxx> wrote:
Hi,

Just a very quick and simple reply:

XFS has *always* treated us nicely, and we have been using it for a VERY
long time, ever since the pre-2000 SUSE 5.2 days, on pretty much all our
machines.

We have seen only very few corruptions on XFS, whereas the few times we
tried btrfs, (almost) always 'something' happened. (The same goes for the
few times we tried reiserfs, btw.)

So, while my story may be very anecdotal (and you will probably find
many others here claiming the opposite), our own conclusion is very
clear: we love XFS, and do not like btrfs very much.

MJ

On 09/22/2018 10:58 AM, Nicolas Huillard wrote:
> Hi all,
>
> I don't have a good track record with XFS since I got rid of ReiserFS a
> long time ago. I decided XFS was a good idea on servers, while I tested
> BTRFS on various less important devices.
> So far, XFS has betrayed me far more often (a few times) than BTRFS
> (never).
> The last time was yesterday, on a root filesystem, with "Block out of
> range: block 0x17b9814b0, EOFS 0x12a000" followed by "I/O Error Detected.
> Shutting down filesystem" (and shutting down the root filesystem hits
> pretty hard).
>
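As a quick sanity check on those numbers (my addition, not from the
original mail; plain bash arithmetic on the two values quoted above):

$ printf '%d\n' 0x17b9814b0 0x12a000   # requested block vs. end of filesystem
6368531632
1220608
$ echo $(( 0x17b9814b0 / 0x12a000 ))   # how many times past the end
5217

So the requested block is roughly 5,200 times past the end of the
filesystem, i.e. more than three orders of magnitude out of range.
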
> Some threads on this ML discuss a similar problem, related to
> partitioning and logical sectors located just after the end of the
> partition. The problem here does not seem to be the same, as the
> requested block is very far out of bounds (more than three orders of
> magnitude too far), and I use a recent Debian stock kernel with every
> security patch.
>
> My question is: should I trust XFS for small root filesystems (/,
> /tmp, /var on LVM sitting within a smallish md-RAID1 partition), or is
> BTRFS finally trustworthy enough for a general-purpose cluster (still
> for root et al. filesystems), or do you guys just use the
> distro-recommended setup (typically Ext4 on plain disks)?
>
> Debian stretch with 4.9.110-3+deb9u4 kernel.
> Ceph 12.2.8 on bluestore (not related to the question).
>
> Partial output of lsblk /dev/sdc /dev/nvme0n1:
> NAME                          MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
> sdc                             8:32   0 447,1G  0 disk
> ├─sdc1                          8:33   0  55,9G  0 part
> │ └─md0                         9:0    0  55,9G  0 raid1
> │   ├─oxygene_system-root     253:4    0   9,3G  0 lvm   /
> │   ├─oxygene_system-tmp      253:5    0   9,3G  0 lvm   /tmp
> │   └─oxygene_system-var      253:6    0   4,7G  0 lvm   /var
> └─sdc2                          8:34   0  29,8G  0 part  [SWAP]
> nvme0n1                       259:0    0   477G  0 disk
> ├─nvme0n1p1                   259:1    0  55,9G  0 part
> │ └─md0                         9:0    0  55,9G  0 raid1
> │   ├─oxygene_system-root     253:4    0   9,3G  0 lvm   /
> │   ├─oxygene_system-tmp      253:5    0   9,3G  0 lvm   /tmp
> │   └─oxygene_system-var      253:6    0   4,7G  0 lvm   /var
> ├─nvme0n1p2                   259:2    0  29,8G  0 part  [SWAP]
>
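For anyone wanting to reproduce a layout like the one in that listing,
here is a minimal sketch (my reconstruction from the lsblk output, not
the commands Nicolas actually ran; VG/LV names and sizes are taken from
the listing, and the mkfs choice is just the filesystem under discussion):

# Mirror the two 55.9G partitions (SATA SSD + NVMe) into md0
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc1 /dev/nvme0n1p1

# Layer LVM on top of the mirror
pvcreate /dev/md0
vgcreate oxygene_system /dev/md0

# Carve out the small system volumes
lvcreate -L 9.3G -n root oxygene_system
lvcreate -L 9.3G -n tmp  oxygene_system
lvcreate -L 4.7G -n var  oxygene_system

# Format, e.g. with XFS
mkfs.xfs /dev/oxygene_system/root

Note that the mirror spans a SATA SSD and an NVMe device, so the two
halves of the RAID1 have quite different performance characteristics.
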
> TIA!
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
