>>> I have a use case where I'd like to put forward the use of XFS. This is
>>> for large (multi-GB, say anywhere from 5GB to 300GB) individual files,
>>> such as what you'd see under a database's data file / tablespace.

>> Step 1) Use XFS.
>> Nothing, and I do mean nothing, comes close to its reliability and
>> consistent performance.

> I have been running iozone benchmarks, [ ... ]

I think that it is exceptionally difficult to get useful results out of
Iozone...

>>> My database vendor (who, coincidentally, markets their own filesystems
>>> and operating systems) says that there are certain problems under XFS,
>>> with specific mention of corruption issues: if a single root or the
>>> metadata become corrupted, the entire filesystem is gone,

If that's bad enough, it applies to any file system out there except FAT
and Reiser, as they store some metadata with each block. ZFS and BTRFS may
have something similar. But it is not an issue.

>>> and it has performance issues on a multi-threaded workload, caused by
>>> the single root filesystem for metadata becoming a bottleneck.

That's actually more of a problem with Lustre, in extreme cases.

>> XFS has anything but performance problems on multithreaded workloads.
>> It is *the* best of the Linux filesystems (actually... possibly any
>> file system anywhere) for multithreaded IO.

That's actually for multithreaded IO to the same file (a sketch of that
pattern is at the end of this message); for multithreaded IO to different
files, JFS (and allegedly 'ext4') are also fairly good.

> Well - I mentioned it above. Their current recommendation for Linux is
> to stick with ext3... and for big file/big IO operations, switch to ext4.

That's just because those are the file systems that are "qualified", and
the 'ext3' defaults give the lowest risk in case the application
environment is misdesigned and relies on 'O_PONIES'.

> [ ... ] "well, ext3 has problems whenever the kernel journal thread wakes
> up to flush under heavy I/O,

That actually happens with every file system, and it is one of several
naive misdesigns in the Linux IO subsystem. The default Linux page cache
flusher parameters are often too "loose" by 1-2 orders of magnitude, and
this can cause serious problems (the usual knobs are sketched at the end).

Never mind that the Linux page cache itself is also a bit of a joke; in any
case a DBMS will (hopefully) not use it anyhow, but use direct IO, and XFS
is targeted at direct IO, large-file, multistreaming loads.
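
PS: re the "multithreaded IO to the same file" point above, here is a
minimal sketch of what that access pattern looks like (file name, thread
count and chunk size are made up; this is an illustration of the pattern,
not anyone's benchmark code):

#define _POSIX_C_SOURCE 200809L
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define NTHREADS 4
#define CHUNK (1 << 20)          /* 1 MiB per thread, made-up size */

static int fd;                   /* one file, shared by all threads */

/* Each thread writes its own disjoint region of the same file. */
static void *writer(void *arg)
{
    long id = (long)arg;
    char *buf = malloc(CHUNK);

    if (!buf)
        return NULL;
    memset(buf, 'A' + (int)id, CHUNK);
    /* pwrite() carries its own offset, so the threads share no file
     * position and need no locking around the descriptor. */
    if (pwrite(fd, buf, CHUNK, (off_t)id * CHUNK) != CHUNK)
        perror("pwrite");
    free(buf);
    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];
    long i;

    fd = open("testfile", O_CREAT | O_WRONLY, 0644);   /* made-up name */
    if (fd < 0) {
        perror("open");
        return 1;
    }
    for (i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, writer, (void *)i);
    for (i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    close(fd);
    return 0;
}

Compile with -pthread; the point is just that pwrite() lets every thread
hit its own region of one shared file without any locking in userspace.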
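
On the page cache flusher parameters: I assume the knobs in question are
the usual vm.dirty_* sysctls. A trivial program to print their current
values looks like this (the paths are the standard /proc/sys ones; the
defaults vary by kernel):

#include <stdio.h>

/* Print the "dirty"/flusher tunables. On a box with a lot of RAM the
 * percentage-based ones can allow many GB of dirty pages to pile up
 * before forced writeback starts. */
int main(void)
{
    const char *knobs[] = {
        "/proc/sys/vm/dirty_background_ratio",
        "/proc/sys/vm/dirty_ratio",
        "/proc/sys/vm/dirty_expire_centisecs",
        "/proc/sys/vm/dirty_writeback_centisecs",
    };
    size_t i;

    for (i = 0; i < sizeof(knobs) / sizeof(knobs[0]); i++) {
        FILE *f = fopen(knobs[i], "r");
        char buf[64];

        if (f && fgets(buf, sizeof buf, f))
            printf("%-42s %s", knobs[i], buf);
        if (f)
            fclose(f);
    }
    return 0;
}

That a percentage of memory, rather than an absolute amount, bounds the
dirty pages is what "too loose by 1-2 orders of magnitude" tends to mean
in practice on large-memory boxes.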
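
Finally, on direct IO: a minimal sketch of the O_DIRECT-style write path a
DBMS would typically use, assuming a 4KiB logical block size and a made-up
file name:

#define _GNU_SOURCE              /* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define ALIGN  4096              /* assumed logical block size */
#define IOSIZE (1 << 20)         /* 1 MiB per write, made-up size */

int main(void)
{
    void *buf;
    int fd = open("tablespace.dat", O_CREAT | O_WRONLY | O_DIRECT, 0644);

    if (fd < 0) {
        perror("open");
        return 1;
    }
    /* O_DIRECT transfers must use a suitably aligned buffer, offset and
     * length, typically aligned to the device's logical block size. */
    if (posix_memalign(&buf, ALIGN, IOSIZE) != 0) {
        fprintf(stderr, "posix_memalign failed\n");
        return 1;
    }
    memset(buf, 0, IOSIZE);

    /* The write bypasses the page cache entirely, so the kernel flusher
     * thread never has this data to fall behind on. */
    if (pwrite(fd, buf, IOSIZE, 0) != IOSIZE)
        perror("pwrite");

    free(buf);
    close(fd);
    return 0;
}

The alignment requirement is the usual O_DIRECT catch: get it wrong and
the write fails with EINVAL.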