Re: Maximum file system size of XFS?

On Saturday, 9 March 2013, Pascal wrote:
> Hello,

Hi Pascal,

> I am asking you because I am unsure about the correct answer and
> different sources give me different numbers.
> 
> 
> My question is: What is the maximum file system size of XFS?
> 
> The official page says: 2^63 = 9 x 10^18 = 9 exabytes
> Source: http://oss.sgi.com/projects/xfs/
> 
> Wikipedia says 16 exabytes.
> Source: https://en.wikipedia.org/wiki/XFS
> 
> Another reference book says 8 exabytes (2^63).
> 
> 
> Can anyone tell me and explain what is the maximum file system size for
> XFS?

You can test the theoretical limit yourself. Whether such a filesystem will 
work nicely with a real workload is, as pointed out, a different question.
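
As for the conflicting numbers: the 8 exabyte and the 9 exabyte figures are 
the same 2^63-byte limit, just expressed in binary versus decimal units. A 
quick check with bc (output shown here only for illustration):

echo '2^63' | bc                    # bytes
9223372036854775808
echo '2^63 / 1024^6' | bc           # binary exabytes (EiB)
8
echo 'scale=2; 2^63 / 10^18' | bc   # decimal exabytes (EB)
9.22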

1) Use a big enough XFS filesystem (yes, it has to be XFS or something else 
that can carry an exabyte-sized sparse file)

merkaba:~> LANG=C mkfs.xfs -L justcrazy /dev/merkaba/zeit
meta-data=/dev/merkaba/zeit      isize=256    agcount=4, agsize=1310720 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=5242880, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
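
Then mount the fresh filesystem somewhere; the steps below assume it is 
mounted at /mnt/zeit, so something like:

mount /dev/merkaba/zeit /mnt/zeit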


2) Create an insanely big sparse file

merkaba:~> truncate -s1E /mnt/zeit/evenmorecrazy.img
merkaba:~> ls -lh /mnt/zeit/evenmorecrazy.img
-rw-r--r-- 1 root root 1,0E Mär 11 22:37 /mnt/zeit/evenmorecrazy.img

(No, this won't work with Ext4.)
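
You can check that the file really is sparse, i.e. that it occupies no space 
on the host filesystem yet (du output illustrative, not from the original 
run):

du -h --apparent-size /mnt/zeit/evenmorecrazy.img   # reported size: 1.0E
du -h /mnt/zeit/evenmorecrazy.img                   # blocks actually used: 0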


3) Make an XFS filesystem inside it:

merkaba:~> mkfs.xfs /mnt/zeit/evenmorecrazy.img

I won't do that today. I tried it for a gag during a Linux performance and 
analysis training I held, on a ThinkPad T520 with a Sandy Bridge i5 at 
2.50 GHz and an Intel SSD 320, using an about 20 GiB XFS filesystem as the 
host.

The mkfs command ran for something like one or two hours. It used quite some 
CPU and quite some SSD bandwidth, but did not max out either of them.

The host XFS filesystem was almost full afterwards; the image alone took up 
just about those 20 GiB.


4) Mount it and enjoy the output of df -hT.
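
Something like this, assuming a loop mount and a mount point of your choice 
(/mnt/crazy here is just an example):

mkdir -p /mnt/crazy
mount -o loop /mnt/zeit/evenmorecrazy.img /mnt/crazy   # loop-mount the image
df -hT /mnt/crazy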


5) Write to it if you dare. I did, until the Linux kernel told me something 
about "lost buffer writes". What I found strange is that the dd writing to 
the 1E filesystem did not quit with an input/output error then. It just kept 
running.
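
The write test itself was just a dd; the exact invocation does not matter, 
anything along these lines (paths and sizes are only examples) will 
eventually exhaust the real space left on the host filesystem:

dd if=/dev/zero of=/mnt/crazy/fillme bs=1M count=100000   # ~100 GB of zeros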


I didn't test this with any larger size, but if space and time usage scale 
linearly, it might be possible to create a 10 EiB filesystem within a 
200 GiB host XFS and about a day of waiting :).

No, I do not suggest using anything even remotely like this in production.

And no, my test didn't show that a 1 EiB filesystem will work nicely with 
any real-life workload.

Am I crazy for trying this? I might be :)

Thanks,
-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs


