Re: Best filesystems ?

[ ... ]

>> But the big deal here is that's not something that a filesystem
>> targeted at high bandwidth multistreaming loads should be
>> optimized for, at least by default.

> Has someone documented (recently, and correctly) what
> filesystems (XFS, ext3, ext4, reiser, JFS, VFAT etc.) are best
> for what tasks (mailboxes, database, general computing, video
> streaming etc.) ?

That's impossible. In part because it would take a lot of work,
but mostly because file system performance is so anisotropic
across storage and application profiles, and there are too many
variables. File systems also have somewhat different features. I
have written some impressions on what matters here:

  http://www.sabi.co.uk/blog/0804apr.html#080415

The basic performance metric for a file system used to be what
percentage of the native performance of a hard disk it could
deliver to a single process, but currently most file systems are
at 90% or higher on that metric.
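
As a rough illustration of that metric (device and file names
here are just examples), one can compare the raw sequential speed
of the disk against the speed seen through the filesystem:

  # raw sequential read speed of the disk itself
  hdparm -t /dev/sda

  # sequential write through the filesystem, forcing data to disk
  dd if=/dev/zero of=/mnt/test/bigfile bs=1M count=4096 conv=fdatasync

  # sequential read back through the filesystem, with cold caches
  echo 3 > /proc/sys/vm/drop_caches
  dd if=/mnt/test/bigfile of=/dev/null bs=1M

If the 'dd' figures are within 90% or so of the 'hdparm' figure,
the filesystem is not the bottleneck for that single-stream case.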

My current impressions are:

* Reiser3 is nice, with the right parameters, for the few cases
  where there are significant numbers of small files on a small
  storage system accessed by a single process.

* 'ext2' is nice for small, freshly loaded filesystems with files
  that are not too big, and for MS-Windows compatibility (there
  are third-party 'ext2' drivers for MS-Windows).

* 'ext3' is good for widespread compatibility and for somewhat
  "average", small or middling filesystems up to 2TB.

* 'ext4' is good for in-place upgradeability from 'ext3', and for
  some more scalability and features, but I can't see any real
  reason why it was developed other than offering an in-place
  upgrade path to RH customers, given that JFS and XFS were
  already there and fairly mature. I think it has a bit more
  scalability and performance than 'ext3' (especially better
  internal parallelism).

* JFS is good for almost everything, including largish filesystems
  on somewhat largish systems with lots of processes accessing
  lots of files; it works equally well on 32b and 64b systems, is
  very stable, and has a couple of nice features. Its major
  downside is less care than XFS about barriers. I think that it
  can support filesystems up to 10-15TB well, and perhaps beyond.
  It should have been the default for Linux for at least a decade
  now, instead of 'ext3'.

* XFS is like JFS, but with somewhat higher scalability, both as
  to sizes and as to internal parallelism in the case of multiple
  processes accessing the same file, and it has a couple of nice
  features (mostly barrier support, but also small blocks and
  large inodes; see the sketch after this list). Its major
  limitations are its internal complexity and that it should only
  be used on 64b systems. It can support single filesystems larger
  than 10-15TB, but that's stretching things.

* Lustre works well as a network parallel large file streaming
  filesystem. It is however somewhat unstable and great care has to
  be taken in integration testing because of that.
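
As a small sketch of the XFS features mentioned above (device
names are just examples, and option values are illustrative;
check mkfs.xfs(8) and mount(8) for your version):

  # XFS with larger inodes, e.g. to keep more attributes inline
  mkfs.xfs -i size=512 /dev/sdb1

  # or with small blocks, for filesystems with many small files
  mkfs.xfs -b size=1024 /dev/sdb1

  # write barriers are a mount option, and are on by default
  mount -o barrier /dev/sdb1 /data

  # JFS by comparison needs little tuning ('-q' skips confirmation)
  mkfs.jfs -q /dev/sdb2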

I currently think that JFS, XFS and Lustre cover more or less all
common workloads. I occasionally use 'ext2' or NTFS for data
exchange between Linux and MS-Windows.
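
For the NTFS side of that, the usual way is the 'ntfs-3g' driver
(device and mount point are just examples):

  mount -t ntfs-3g /dev/sdc1 /mnt/windows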

There are some other interesting ones:

* UDF could be a very decent small-to-average sized file system,
  especially for interchange, but also for general purpose use.
  Implementations are a bit lacking.

* OCFS2 used in non-cluster mode works well and has pretty decent
  performance, and it can be used in shared-storage mode too, but
  it still seems a bit too unstable.

* NILFS2 seems just the right thing for SSD based file systems,
  and with a garbage collector it could be a general purpose file
  system (see the sketch after this list).

* GlusterFS seems quite interesting for the distributed case.
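
A minimal NILFS2 sketch, for those who want to try it (the device
name is an example; 'nilfs-utils' provides the userspace tools,
and mounting starts the cleaner daemon automatically):

  mkfs -t nilfs2 /dev/sdd1
  mount -t nilfs2 /dev/sdd1 /mnt/nilfs

  # continuous checkpointing gives cheap point-in-time views
  lscp /dev/sdd1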

Reiser4 also looked fairly good, but it has been sort of
superseded by BTRFS (whose author used to work on Reiser4). BTRFS
seems like a not entirely justified successor to 'ext4', though it
also has a few nice extra features built in that I am not sure
really need to be built in (I have the same feeling about ZFS).

Then there are traditional file systems which are also of some
special interest, like OpenAFS or GFS2.

The major current issue with all of these is 'fsck' times and the
lack of scalability to single-pool filesystems larger than several
TB; doing that well is still a research issue (which hasn't
stopped happy-go-lucky people from creating much larger
filesystems, usually with XFS, "because you can", but good luck to
them).
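
To see why, a back-of-envelope calculation with purely
illustrative, assumed numbers: if 'fsck' has to read about 1% of
a 10TB filesystem as metadata, that is around 100GB, and since
metadata access is largely random an effective rate of 5MB/s is
optimistic on rotating disks; 100GB at 5MB/s is about 20,000
seconds, that is well over 5 hours of downtime, and it only
grows with filesystem size.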

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs

