On 8/1/2013 9:25 PM, Dave Chinner wrote:
...
> So really, the numbers only reflect a difference in layout of the
> files being tested. And using small direct IO means that the
> filesystem will tend to fill small free spaces close to the
> inode first, and so will fragment the file based on the locality of
> fragmented free space to the owner inode. In the case of the new
> filesystem, there is only large, contiguous free space near the
> inode....
...
>> What can be done (as a user) to mitigate this effect?
>
> Buy faster disks ;)
>
> Seriously, all filesystems age and get significantly slower as they
> get used. XFS is not really designed for single spindles - its
> algorithms are designed to spread data out over the entire device
> and so be able to make use of the many, many spindles that make up
> the device. This behaviour works extremely well for that sort of
> large-scale scenario, but it's close to the worst case aging
> behaviour for a single, very slow spindle like you are using. Hence,
> once the filesystem is over the "we have pristine, contiguous
> freespace" hump on your hardware, it's all downhill and there's not
> much you can do about it....

Wouldn't the inode32 allocator yield somewhat better results with this
direct IO workload on Markus' single slow spindle? It shouldn't fragment
free space quite as badly in the first place, nor suffer from trying to
use the many small fragments surrounding the inode as in the case above.

Whether or not inode32 would be beneficial to his real workload(s) I
don't know; I tend to think it might make at least a small positive
difference. However, given that XFS is trying to move away from inode32
altogether, I can see why you wouldn't mention it, even if it might
yield some improvement in this case.

--
Stan
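
P.S. For anyone wanting to reproduce the behaviour Dave describes, a
minimal sketch of that kind of workload might look like the following:
many small, aligned O_DIRECT writes, each of which bypasses the page
cache and is allocated on disk as it arrives. The file name, write
count and 4 KiB block size here are illustrative assumptions only, not
taken from Markus' test.

/* Sketch: small direct-IO append workload. Compile with gcc -O2. */
#define _GNU_SOURCE          /* O_DIRECT on Linux */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const size_t blksz = 4096;   /* assumed block size; O_DIRECT needs
                                    buffer, offset and length aligned */
    void *buf;
    if (posix_memalign(&buf, blksz, blksz) != 0) {
        perror("posix_memalign");
        return 1;
    }
    memset(buf, 'x', blksz);

    int fd = open("testfile", O_WRONLY | O_CREAT | O_DIRECT, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* 256 separate 4 KiB direct writes: space is allocated a little at
     * a time instead of in one large delayed-allocation extent, so the
     * allocator keeps reaching for free space near the owner inode. */
    for (int i = 0; i < 256; i++) {
        if (pwrite(fd, buf, blksz, (off_t)i * blksz) != (ssize_t)blksz) {
            perror("pwrite");
            return 1;
        }
    }

    close(fd);
    free(buf);
    return 0;
}

The resulting extent layout of the file can then be inspected with
xfs_bmap -v to see how fragmented it ended up on an aged filesystem
versus a fresh one.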