Re: What FileSystems for large stores and very very large stores?

----- Original Message -----
| I have been learning about the different filesystems that exist.
| I used to work on systems where ReiserFS was the star, but since it
| is no longer supported by its creator there are other options to
| consider.
| I want to ask about a couple of FS options.
| EXT4 is amazing for one node, but for more than one it's another
| story.
| I have heard about GFS2 and GlusterFS and read the docs and official
| materials from RH on them.
| The RH docs state that the EXT4 limit is 65k files per directory,
| and I had a directory which was pretty loaded with files; I am unsure
| exactly how many, but I am almost sure it was more than 65k files
| per directory.
| 
| I was considering using GlusterFS for a very large storage system
| with an NFS front end.
| I am still unsure whether EXT4 should or shouldn't be able to handle
| more than 16TB, since the Linux kernel ext4 docs at
| https://www.kernel.org/doc/Documentation/filesystems/ext4.txt state
| in section 2.1: "* ability to use filesystems > 16TB (e2fsprogs
| support not available yet)".
| So can I use it or not? If there are no tools to handle this size
| then I cannot trust it.
| 
| I want to create a store of more than 16TB based on GlusterFS, since
| it allows me to use a 2-3 ring FS layout, which will let me put the
| storage in the form of:
| 1 client -> HA NFS servers -> GlusterFS cluster.
| 
| It seems to me that GlusterFS is a better choice than Swift, since
| RH does provide support for it.
| 
| Every response will be appreciated.
| 
| Thanks,
| Eliezer

As someone who has some rather large volumes for research storage, I will say that ALL of the file systems have limitations, *especially* in the case of failures.  I have typical volumes that range from 16TB up to 48TB, and the big issue is performing file system checks.  You see, a lot of information gets loaded into memory in order to perform a file system check.  A number of years ago I was unable to perform an EXT4 file system check on a 15TB volume without consuming over 32GB of memory, and that was on a file system with very few files.  At the time, the file server only had 8GB of memory, so this presented a problem.

However, while this problem was solvable, it was also subject to usage.  The file system in question only had large files on it, typically gigabytes in size.  Another filer, this time with 48GB of memory but tens of millions of very small files, needed nearly 96GB of memory to complete a file system check.
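If you do end up fsck'ing a large ext4 volume on a memory-starved box, one knob worth knowing about is e2fsck's scratch_files support, which spills its in-memory tables to disk.  A minimal sketch (the directory path and device below are just examples, not what we ran on that filer):

    # /etc/e2fsck.conf
    [scratch_files]
    directory = /var/cache/e2fsck    # e2fsck stages its tables here instead of RAM

    # then check as usual; slower, but it stays within memory
    e2fsck -f /dev/sdb1

It trades RAM for (much slower) disk I/O, so treat it as a rescue hatch rather than part of the regular maintenance path.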

So far, without a doubt, XFS has been the best "overall" file system for our use cases, but YMMV.  It would seem that Red Hat is also pushing it as the file system of choice going forward until something better ( btrfs *snicker* ) comes along.  XFS is also the recommended file system for use with GlusterFS, so that makes it an easy choice too.

GlusterFS itself has some H/A built in.  You can talk to any of the GlusterFS servers via NFS and it will fully operate in an active/active manner, so your diagram would be 1 client -> Gluster Cluster (via the protocols supported by Gluster: NFS/CIFS/NATIVE).  I have found it to be rather fragile in some respects, and some of my workloads just don't map well to it even though it looks like they should gain some benefit.  However, it does seem to work well for other workloads, and it is being actively developed.
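To make that concrete (the hostnames and volume name below are made up), the same volume can be reached either through the native FUSE client, which knows about every brick and fails over on its own, or via plain NFSv3 against any one of the servers:

    # native client -- built-in failover, no separate HA NFS layer needed
    mount -t glusterfs server1:/bigvol /mnt/bigvol

    # or Gluster's built-in NFS server (NFSv3 only)
    mount -t nfs -o vers=3 server1:/bigvol /mnt/bigvol

One caveat with the NFS route: if server1 dies, that client loses its mount unless you float an IP in front of the servers (CTDB or keepalived, say), which is where the "HA NFS servers" box in your diagram would come in.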

GlusterFS also allows you to "import" existing file systems at a later time.  So feel free to start off with a standard XFS volume, but be mindful of the XFS options that GlusterFS requires, namely an inode size of 512 bytes (for Gluster's extended attributes).  Then, if you decide to add Gluster to your storage infrastructure, you can perform said "import" and start replicated or distributed file serving from Gluster.
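A rough sketch of that path (device, hostnames, brick paths and volume name are all placeholders):

    # format the brick the way Gluster wants it: 512-byte inodes for xattrs
    mkfs.xfs -i size=512 /dev/sdb1
    mount /dev/sdb1 /bricks/b1

    # later, fold the existing brick into a replicated Gluster volume
    gluster volume create bigvol replica 2 \
        server1:/bricks/b1 server2:/bricks/b1
    gluster volume start bigvol

Be aware that more recent Gluster releases complain about creating a volume on a brick that already contains data, so check the release notes for your version before counting on the "import" behaviour.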

-- 
James A. Peltier
Manager, IT Services - Research Computing Group
Simon Fraser University - Burnaby Campus
Phone   : 778-782-6573
Fax     : 778-782-3045
E-Mail  : jpeltier@xxxxxx
Website : http://www.sfu.ca/itservices

“A successful person is one who can lay a solid foundation from the bricks others have thrown at them.” -David Brinkley via Luke Shaw
_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
http://lists.centos.org/mailman/listinfo/centos




