On Friday, 6 April 2012, Stefan Ring wrote:
> > thanks for the detailed report.
>
> Thanks for the detailed and kind answer.
>
> > Can you try a few mount options for me, both all together and, if
> > you have some time, also individually.
> >
> > -o inode64
> >
> > This allows inodes to be close to data even for >1TB
> > filesystems. It's something we hope to make the default soon.
>
> The filesystem is not that large. It’s only 400GB. I turned it on
> anyway. No difference.
>
> > -o filestreams
> >
> > This keeps data written in a single directory group together.
> > Not sure your directories are large enough to really benefit
> > from it, but it's worth a try.
> >
> > -o allocsize=4k
> >
> > This disables the aggressive file preallocation we do in XFS,
> > which sounds like it's not useful for your workload.
>
> inode64+filestreams: no difference
> inode64+allocsize: no difference
> inode64+filestreams+allocsize: no difference :(
>
> > For metadata-intensive workloads like yours you would be much
> > better off using a non-striping raid, e.g. concatenation and
> > mirroring instead of raid 5 or raid 6. I know this has a cost in
> > terms of "wasted" space, but for IOPS-bound workloads the
> > difference is dramatic.
>
> Hmm, I’m sure you’re right, but I’m out of luck here. If I had 24
> drives, I could think about a different organization. But with only 6
> bays, I cannot give up all that space.
>
> Although *in theory*, it *should* be possible to run fast for
> write-only workloads. The stripe size is 64 KB (4x16), and it’s not
> like data is written all over the place. So it should very well be
> possible to write the data out in some reasonably sized and aligned
> chunks. The filesystem partition itself is nicely aligned.

And is XFS aligned to the RAID 6? What does xfs_info display on it?

-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA B82F 991B EAAC A599 84C7
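
For reference, a minimal sketch of how the three suggested options can
be tested together. The device and mount point are hypothetical, and a
full umount/mount cycle is shown because inode64 generally could not
be enabled via remount on kernels of that era:

    # umount /data
    # mount -o inode64,filestreams,allocsize=4k /dev/sdb1 /data

    # or persistently, via /etc/fstab:
    /dev/sdb1  /data  xfs  inode64,filestreams,allocsize=4k  0  2

allocsize=4k caps speculative preallocation at a single 4 KB block,
which is why it is worth testing on a workload that does not benefit
from large preallocations.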
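
To illustrate the alignment question: xfs_info reports the stripe
geometry as sunit/swidth in filesystem blocks. For this array (16 KB
chunk, 4 data disks, 4 KB filesystem blocks), correctly aligned output
could look roughly like the sketch below; the device name and block
counts are made up:

    # xfs_info /data
    meta-data=/dev/sdb1       isize=256    agcount=4, agsize=24414063 blks
    data     =                bsize=4096   blocks=97656250, imaxpct=25
             =                sunit=4      swidth=16 blks
    ...

    # sunit=4 blks   -> 4 x 4 KB  = 16 KB chunk size
    # swidth=16 blks -> 16 x 4 KB = 64 KB full stripe (4 data disks)

If xfs_info shows sunit=0, swidth=0, the geometry was not detected at
mkfs time; it can be supplied at mount time instead, with the values
given in 512-byte sectors:

    # mount -o sunit=32,swidth=128 /dev/sdb1 /data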