Hi all,

I don't have a good track record with XFS since I got rid of ReiserFS a long time ago. I decided XFS was a good idea on servers, while I tested BTRFS on various less important devices. So far, XFS has betrayed me far more often (a few times) than BTRFS (never).

The last time was yesterday, on a root filesystem:

  Block out of range: block 0x17b9814b0, EOFS 0x12a000
  I/O Error Detected. Shutting down filesystem

(Shutting down the root filesystem hits pretty hard.)

Some threads on this ML discuss a similar problem, related to partitioning and logical sectors located just after the end of the partition. The problem here does not seem to be the same, as the requested block is very far out of bounds (more than three orders of magnitude too far), and I use a recent Debian stock kernel with every security patch.

My question is: should I trust XFS for small root filesystems (/, /tmp, /var on LVM sitting within a smallish md-RAID1 partition)? Or is BTRFS finally trustworthy enough for a general-purpose cluster (still for root and related filesystems)? Or do you just use the distro-recommended setup (typically Ext4 on plain disks)?

Debian stretch with the 4.9.110-3+deb9u4 kernel. Ceph 12.2.8 on BlueStore (not related to the question).

Partial output of lsblk /dev/sdc /dev/nvme0n1:

NAME                      MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sdc                         8:32   0 447,1G  0 disk
├─sdc1                      8:33   0  55,9G  0 part
│ └─md0                     9:0    0  55,9G  0 raid1
│   ├─oxygene_system-root 253:4    0   9,3G  0 lvm   /
│   ├─oxygene_system-tmp  253:5    0   9,3G  0 lvm   /tmp
│   └─oxygene_system-var  253:6    0   4,7G  0 lvm   /var
└─sdc2                      8:34   0  29,8G  0 part  [SWAP]
nvme0n1                   259:0    0   477G  0 disk
├─nvme0n1p1               259:1    0  55,9G  0 part
│ └─md0                     9:0    0  55,9G  0 raid1
│   ├─oxygene_system-root 253:4    0   9,3G  0 lvm   /
│   ├─oxygene_system-tmp  253:5    0   9,3G  0 lvm   /tmp
│   └─oxygene_system-var  253:6    0   4,7G  0 lvm   /var
├─nvme0n1p2               259:2    0  29,8G  0 part  [SWAP]

TIA!
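For reference, the "far out of bounds" claim is easy to check with a quick computation; the two hex values below are copied straight from the error message, and the ratio shows how far past the end of the filesystem the requested block sits (a rough sanity check, not a diagnosis):

```python
# Hex values taken from the XFS error message above.
# "EOFS" is the last valid filesystem block, so any request beyond it
# is out of range; partition-misalignment bugs are typically off by
# only a handful of blocks, not by thousands of times the FS size.
bad_block = 0x17b9814b0   # requested block: 6,368,531,632
eofs = 0x12a000           # end of filesystem: 1,220,608

ratio = bad_block / eofs
print(f"requested block is ~{ratio:.0f}x past the end of the filesystem")
# ratio is a bit over 5000, i.e. more than three decimal orders
# of magnitude beyond EOFS
```

So whatever produced this request, it is nowhere near the "just after the end of the partition" pattern from the earlier threads.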
--
Nicolas Huillard
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com