On Wed, Apr 18, 2012 at 11:36:07AM +1000, Dave Chinner wrote:
> And it assumes that inode32 cannot do locality of
> files at all, when in fact it has tunable locality through a syctl.
>
> Indeed, here's some the performance enhancing games SGI play that
> can only be achieved by using the inode32 allocator:
>
> http://techpubs.sgi.com/library/tpl/cgi-bin/getdoc.cgi?coll=linux&db=bks&fname=/SGI_Admin/LX_XFS_AG/ch07.html

Ah, that's new to me. So with inode32 and sysctl fs.xfs.rotorstep=255 you
can get roughly the same locality benefit for sequentially-written files
as inode64?

(Aside: if you have two processes writing files to two different
directories, will they end up mixing their files in the same AG? That
could hurt performance at readback time when reading them sequentially.)

I'm not really complaining about anything here except the dearth of
readily-accessible information. If I download the whole admin book:

http://techpubs.sgi.com/library/manuals/4000/007-4273-004/pdf/007-4273-004.pdf

I see the inode64 option mentioned only once, in passing (as being
incompatible with ibound). So if there's detailed information on what
exactly inode64 does and when to use it, it must be somewhere else.

Here's my user story. As a newbie, my first test was to make a 3TB
filesystem on a single drive, with a simple workload of writing 1000
files per directory sequentially. I could achieve a sequential write
speed of 75MB/s but only a sequential read speed of 25MB/s. After
questioning this on the list, I eventually found that the files were
scattered around the disk (thanks to xfs_bmap) and was pointed to the
inode64 option, which I had seen in the FAQ but hadn't realised would
make such a big performance difference.

This wasn't just an idle benchmark: my main application creates a corpus
of files and then processes that corpus (either sequentially, or with
multiple processes each working sequentially through subsections of the
corpus).

Regards,

Brian.
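
P.S. For anyone else who hits this, here is roughly what the options
discussed above look like in practice. This is only a sketch: /dev/sdb1
and /data are placeholder names, and note that inode64 only affects
newly created inodes, so existing files stay where they are.

    # mount with inode64 so new inodes (and the data allocated near
    # them) can live in any AG, rather than data being pushed away
    # from the low AGs that hold the inodes
    mount -o inode64 /dev/sdb1 /data

    # or stay with inode32 but tell the allocator to rotor to the
    # next AG only every 255 new files, keeping sequentially-written
    # files closer together on disk
    sysctl -w fs.xfs.rotorstep=255

    # check where a file actually ended up: the AG column of the
    # verbose extent listing shows which allocation group holds
    # each extent
    xfs_bmap -v /data/somefile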