XFS peculiar behavior

Hi all!

I have come across the following peculiar behavior in XFS, and I would appreciate any information anyone could provide.
In our lab we have a system with twelve 500GByte hard disks (6TByte total capacity), connected to an Areca ARC-1680D-IX-12 SAS storage controller. The disks are configured as a RAID-0 device, and I create a clean XFS filesystem on top of the RAID volume, using the whole capacity. We use this test setup to measure performance improvement for a TPC-H experiment: we copy the database onto the clean XFS filesystem using the cp utility. The database used in our experiments is 56GBytes in size (data + indices).

The problem is that I have noticed that XFS may, though not every time, split a table over a large disk distance. For example, in one run I noticed that a 13GByte file was split over a 4.7TByte distance. I calculate this distance by subtracting the first disk block used by the file from the final one; both block values are acquired using the FIBMAP ioctl.
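
For reference, this is a minimal sketch of how the distance is measured (simplified, with error handling trimmed; note that the FIBMAP ioctl needs root privileges):

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/stat.h>
#include <linux/fs.h>   /* FIBMAP, FIGETBSZ */

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    int bsz;
    if (ioctl(fd, FIGETBSZ, &bsz) < 0) { perror("FIGETBSZ"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    long nblocks = (st.st_size + bsz - 1) / bsz;
    long first = -1, last = -1;

    for (long i = 0; i < nblocks; i++) {
        /* FIBMAP takes a logical block number in and returns
         * the physical (on-disk) block number out. */
        int blk = (int)i;
        if (ioctl(fd, FIBMAP, &blk) < 0) { perror("FIBMAP"); return 1; }
        if (blk == 0)           /* 0 means a hole / unmapped block */
            continue;
        if (first < 0 || blk < first) first = blk;
        if (blk > last) last = blk;
    }

    printf("first block %ld, last block %ld, distance %ld blocks (%ld bytes)\n",
           first, last, last - first, (last - first) * (long)bsz);
    close(fd);
    return 0;
}

Run against the 13GByte table file, this reports the first and last physical blocks used and the distance between them.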
Is there some reasoning behind this (peculiar) behavior? I would expect that, since the underlying storage is so large and the dataset so small, XFS would try to minimize disk seeks and thus place the file sequentially on disk. Furthermore, I understand that XFS may leave some blocks unused between subsequent file blocks in order to handle any write appends that may come afterward, but I wouldn't expect such a large splitting of a single file.
Any help?

Thanks in advance,
Yannis Klonatos


