High Fragmentation with XFS and NFS Sync

Hi, hope someone can help me here.

I'm exporting some XFS filesystems to ESX via NFS with the sync option
enabled. I'm seeing really heavy fragmentation when multiple VMs are copied
onto the share at the same time. I'm also seeing kmem_alloc failures, which
is probably the bigger problem, as it effectively takes everything down.
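
The export looks roughly like this (paths and hostname are placeholders,
not the real config):

    /export/vms    esx-host(rw,sync,no_subtree_check)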

The underlying storage is a Ceph RBD, and the server the FS is running on
has kernel 4.5.7. Mount options are currently the defaults. Running xfs_db,
I'm seeing millions of extents where the ideal is listed as a couple of
thousand, yet there are only a couple of hundred files on the FS. The
extent sizes roughly match the IO size the VMs were written to XFS with, so
it looks like each parallel IO thread is being allocated next to the others
rather than at spaced-out regions of the disk.
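
In case it helps, this is roughly how I'm measuring it (the device and
file names here are just placeholders):

    # actual vs. ideal extent counts for the whole fs (read-only)
    xfs_db -r -c frag /dev/rbd0

    # extent map for a single VM disk image
    xfs_bmap -v /export/vms/guest1-flat.vmdk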


From what I understand, this is because each NFS write opens and closes the
file, which defeats any chance of XFS using its allocation features to stop
parallel write streams from interleaving with each other.

Is there anything I can tune to give each write to each file a little bit
of space, so that readahead at least has a chance of hitting a few MB of
sequential data when reading back?
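
For example, would an extent size hint on the share's top-level directory,
or a larger allocsize at mount time, be the right kind of thing? Something
along these lines, with the sizes being pure guesses on my part:

    # extent size hint, inherited by new files created under this dir
    xfs_io -c "extsize 1m" /export/vms

    # or a larger speculative preallocation size at mount time
    mount -o allocsize=64m /dev/rbd0 /export/vms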

I have read that inode32 allocates more randomly than inode64, but I'm not
sure it's worth trying, as there will likely be fewer than 1,000 files per
FS.
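
i.e. something like:

    # try the inode32 allocator instead of the inode64 default
    mount -o inode32 /dev/rbd0 /export/vms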

Or am I best just to run xfs_fsr after everything has been copied on?
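
e.g. simply:

    # defragment in place once the copies have finished
    xfs_fsr -v /export/vms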

Thanks for any advice
Nick
_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs
