On Jul 25, 2014, at 6:38 PM, Dave Chinner <david@xxxxxxxxxxxxx> wrote:

> On Fri, Jul 25, 2014 at 10:52:57AM -0700, Zach Brown wrote:
>> On Fri, Jul 25, 2014 at 01:37:19PM -0400, Abhijith Das wrote:
>>> Hi all,
>>>
>>> The topic of a readdirplus-like syscall had come up for discussion at last year's
>>> LSF/MM collab summit. I wrote a couple of syscalls with their GFS2 implementations
>>> to get at a directory's entries as well as stat() info on the individual inodes.
>>> I'm presenting these patches and some early test results on a single-node GFS2
>>> filesystem.
>>>
>>> 1. dirreadahead() - This patchset is very simple compared to the xgetdents() system
>>> call below and scales very well for large directories in GFS2. dirreadahead() is
>>> designed to be called prior to getdents+stat operations.
>>
>> Hmm. Have you tried plumbing these read-ahead calls in under the normal
>> getdents() syscalls?
>
> The issue is not directory block readahead (which some filesystems
> like XFS already have), but issuing inode readahead during the
> getdents() syscall.
>
> It's the semi-random, interleaved inode IO that is being optimised
> here (i.e. queued, ordered, issued, cached), not the directory
> blocks themselves.

Sure.

> As such, why does this need to be done in the
> kernel? This can all be done in userspace, and even hidden within
> the readdir() or ftw/nftw() implementations themselves so it's OS,
> kernel and filesystem independent......

That assumes sorting by inode number maps to sorting by disk order.
That isn't always true.

Cheers, Andreas
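
[For concreteness, a minimal userspace sketch of the approach Dave describes: collect the directory entries, sort them by d_ino, then stat() them in that order so the inode I/O is issued in roughly ascending inode-number order. This is only an illustration, not code from the thread, and it bakes in exactly the assumption Andreas questions above, namely that inode-number order approximates on-disk order, which holds on some filesystems and layouts but is not guaranteed in general.]

/*
 * Userspace "inode readahead" sketch: read all directory entries,
 * sort them by inode number, then stat() them in that order.
 * ASSUMPTION: inode-number order approximates on-disk order.
 */
#include <dirent.h>
#include <fcntl.h>
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <sys/types.h>

struct ent {
	ino_t ino;
	char name[NAME_MAX + 1];
};

static int cmp_ino(const void *a, const void *b)
{
	const struct ent *x = a, *y = b;

	if (x->ino < y->ino)
		return -1;
	return x->ino > y->ino;
}

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : ".";
	DIR *dir = opendir(path);
	struct dirent *de;
	struct ent *ents = NULL;
	size_t n = 0, cap = 0;

	if (!dir) {
		perror("opendir");
		return 1;
	}

	/* Pass 1: collect (name, inode number) pairs from the directory. */
	while ((de = readdir(dir)) != NULL) {
		if (n == cap) {
			cap = cap ? cap * 2 : 1024;
			ents = realloc(ents, cap * sizeof(*ents));
			if (!ents) {
				perror("realloc");
				return 1;
			}
		}
		ents[n].ino = de->d_ino;
		snprintf(ents[n].name, sizeof(ents[n].name), "%s", de->d_name);
		n++;
	}

	/* Sort by inode number before issuing any stat() calls. */
	qsort(ents, n, sizeof(*ents), cmp_ino);

	/* Pass 2: stat in inode-number order, relative to the directory fd. */
	for (size_t i = 0; i < n; i++) {
		struct stat st;

		if (fstatat(dirfd(dir), ents[i].name, &st,
			    AT_SYMLINK_NOFOLLOW) == 0)
			printf("%llu %s\n",
			       (unsigned long long)st.st_ino, ents[i].name);
	}

	closedir(dir);
	free(ents);
	return 0;
}

[A library-level version hidden inside readdir() or nftw() would presumably batch this per directory chunk rather than reading the whole directory up front, but the sorting trade-off is the same: it only wins when inode number is a good proxy for disk location.]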