Is XFS suitable for 350 million files on 20TB storage?

Hi,

I have a backup system with 20TB of storage holding 350 million files.
This was working fine for months.

But now the free space is so heavily fragmented that I only see kworker
threads at 4x 100% CPU and write speed has become very slow. 15TB of the
20TB are in use.
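(For reference, I am not doing anything fancy to see this; a plain

# top -b -n 1 | grep kworker

shows the kworker threads pinned at 100%, and I assume a perf top run
during a backup would show where that time actually goes, but I have not
captured that yet.)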

In total there are 350 million files, all spread across different
directories with at most 5000 files per directory.

Kernel is 3.10.53 and mount options are:
noatime,nodiratime,attr2,inode64,logbufs=8,logbsize=256k,noquota
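
For completeness, the corresponding /etc/fstab entry looks roughly like
this (the mount point /backup is just an example):

/dev/sda1  /backup  xfs  noatime,nodiratime,attr2,inode64,logbufs=8,logbsize=256k,noquota  0  0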

# xfs_db -r -c freesp /dev/sda1
   from      to   extents     blocks    pct
      1       1  29484138   29484138   2.16
      2       3  16930134   39834672   2.92
      4       7  16169985   87877159   6.45
      8      15  78202543  999838327  73.41
     16      31   3562456   83746085   6.15
     32      63   2370812  102124143   7.50
     64     127    280885   18929867   1.39
    256     511         2        827   0.00
    512    1023        65      35092   0.00
   2048    4095         2       6561   0.00
  16384   32767         1      23951   0.00
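
For a quicker overview the summary form of the same command should also
work (if I read the xfs_db man page correctly, -s prints the total number
of free extents, total free blocks and the average free extent size):

# xfs_db -r -c "freesp -s" /dev/sda1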

Is there anything I can optimize? Or is it just a bad idea to do this
with XFS? Any other options? Maybe rsync options like --inplace /
--no-whole-file?
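
To be concrete, the rsync call I have in mind would look roughly like
this (source and destination paths are placeholders):

rsync -a --inplace --no-whole-file /source/ /backup/current/

As far as I understand it, --inplace updates changed files in place
instead of writing a temporary copy and renaming it, and --no-whole-file
forces the delta algorithm even for local copies, so unchanged parts of
a file are not rewritten.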

Greets,
Stefan




