RE: [PATCH v3 0/2] io_uring: add support for IORING_OP_GETDENTS

From: Jens Axboe
> Sent: 20 February 2021 18:29
> 
> On 2/20/21 10:44 AM, David Laight wrote:
> > From: Lennert Buytenhek
> >> Sent: 18 February 2021 12:27
> >>
> >> These patches add support for IORING_OP_GETDENTS, which is a new io_uring
> >> opcode that more or less does an lseek(sqe->fd, sqe->off, SEEK_SET)
> >> followed by a getdents64(sqe->fd, (void *)sqe->addr, sqe->len).
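For reference, a minimal sketch of that synchronous sequence, using the
raw getdents64(2) syscall (older glibc has no wrapper for it); the
helper name below is made up for illustration:

	#define _GNU_SOURCE
	#include <sys/syscall.h>
	#include <unistd.h>

	/* Roughly what one IORING_OP_GETDENTS request does:
	 * position the directory file, then read a batch of dirents. */
	static long getdents_at(int fd, off_t off, void *buf, unsigned int len)
	{
		if (lseek(fd, off, SEEK_SET) == (off_t)-1)	/* sqe->off */
			return -1;
		return syscall(SYS_getdents64, fd, buf, len);	/* sqe->addr, sqe->len */
	}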
> >>
> >> A dumb test program for IORING_OP_GETDENTS is available here:
> >>
> >> 	https://krautbox.wantstofly.org/~buytenh/uringfind-v2.c
> >>
> >> This test program does something along the lines of what find(1) does:
> >> it scans recursively through a directory tree and prints the names of
> >> all directories and files it encounters along the way -- but then using
> >> io_uring.  (The io_uring version prints the names of encountered files and
> >> directories in an order that's determined by SQE completion order, which
> >> is somewhat nondeterministic and likely to differ between runs.)
> >>
> >> On a directory tree with 14-odd million files in it that's on a
> >> six-drive (spinning disk) btrfs raid, find(1) takes:
> >>
> >> 	# echo 3 > /proc/sys/vm/drop_caches
> >> 	# time find /mnt/repo > /dev/null
> >>
> >> 	real    24m7.815s
> >> 	user    0m15.015s
> >> 	sys     0m48.340s
> >> 	#
> >>
> >> And the io_uring version takes:
> >>
> >> 	# echo 3 > /proc/sys/vm/drop_caches
> >> 	# time ./uringfind /mnt/repo > /dev/null
> >>
> >> 	real    10m29.064s
> >> 	user    0m4.347s
> >> 	sys     0m1.677s
> >> 	#
> >
> > While there may be uses for IORING_OP_GETDENTS, are you sure your
> > test is comparing like with like?
> > The underlying work has to be done in either case, so you are
> > swapping system calls for code complexity.
> 
> What complexity?

Even adding commands to a list to execute later is 'complexity'.
As in spending more cpu cycles.

> > I suspect that find is actually doing a stat() call on every
> > directory entry and that your io_uring example is just believing
> > the 'directory' flag returned in the directory entry for most
> > modern filesystems.
> 
> While that may be true (find doing stat as well), the runtime is
> clearly dominated by IO. Adding a stat on top would be an extra
> copy, but no extra IO.

I'd expect stat() to require the disk inode to be read into memory.
getdents() only requires that the directory data be read.
So calling stat() on every entry requires a lot more IO.
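Concretely, the difference is between trusting d_type straight out of
the dirent and paying for an inode read with a stat-family call; a
sketch of the cheap path (is_dir() is just an illustrative helper),
falling back only for filesystems that report DT_UNKNOWN:

	#include <dirent.h>	/* DT_DIR, DT_UNKNOWN */
	#include <fcntl.h>	/* AT_SYMLINK_NOFOLLOW */
	#include <sys/stat.h>

	/* Returns 1 if the entry is a directory, trusting d_type when
	 * the filesystem fills it in and paying for an inode read
	 * (fstatat) only when it doesn't. */
	static int is_dir(int dirfd, const char *name, unsigned char d_type)
	{
		struct stat st;

		if (d_type != DT_UNKNOWN)
			return d_type == DT_DIR;
		if (fstatat(dirfd, name, &st, AT_SYMLINK_NOFOLLOW) < 0)
			return 0;
		return S_ISDIR(st.st_mode);
	}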

The other thing I just realised is that the 'system time'
output from time(1) is completely meaningless for the io_uring case.
All that processing is done by kernel threads, and I doubt
it is re-attributed to the user process.

> > If you write a program that does openat(), readdir(), close()
> > for each directory and, with a long enough buffer, (mostly) does
> > one readdir() per directory, you'll get a much better comparison.
> >
> > You could even write a program with 2 threads, one doing all the
> > open/readdir/close system calls and the other doing the printing
> > and generating the list of directories to process.
> > That should get you the overlapping that io_uring gives
> > without much of the complexity.
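A sketch of that single-directory pass with the opendir-family
wrappers; the work-list handling and the second thread are left out,
and scan_dir() is a made-up name:

	#include <dirent.h>
	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>

	/* One openat()/readdir()/close() pass over a single directory;
	 * a real comparison program would push subdirectory names onto
	 * a work list (or hand them to a second thread) instead of
	 * printing everything here. */
	static void scan_dir(int parentfd, const char *name)
	{
		int fd = openat(parentfd, name, O_RDONLY | O_DIRECTORY);
		DIR *d;
		struct dirent *de;

		if (fd < 0)
			return;
		d = fdopendir(fd);	/* takes ownership of fd */
		if (!d) {
			close(fd);
			return;
		}
		while ((de = readdir(d)) != NULL)
			puts(de->d_name);
		closedir(d);
	}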
> 
> But this is what I take the most offense to - it's _trivial_ to
> write that program with io_uring, especially compared to managing
> threads. Threads are certainly a more known paradigm at this point,
> but an io_uring submit + reap loop is definitely not "much of the
> complexity". If you're referring to the kernel change itself, that's
> trivial, as the diffstat shows.
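The shape of that submit + reap loop with liburing, for comparison;
note that io_uring_prep_getdents() is hypothetical here, since the
opcode only exists in these patches and has no liburing helper yet:

	#include <liburing.h>

	static void list_dir(int dirfd, void *buf, unsigned int buflen)
	{
		struct io_uring ring;
		struct io_uring_sqe *sqe;
		struct io_uring_cqe *cqe;

		io_uring_queue_init(8, &ring, 0);

		/* submit: queue a getdents request for the directory */
		sqe = io_uring_get_sqe(&ring);
		io_uring_prep_getdents(sqe, dirfd, buf, buflen, 0);	/* hypothetical */
		io_uring_submit(&ring);

		/* reap: wait for the completion and consume it */
		io_uring_wait_cqe(&ring, &cqe);
		/* cqe->res: number of bytes of dirents read, or -errno */
		io_uring_cqe_seen(&ring, cqe);

		io_uring_queue_exit(&ring);
	}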

I've looked at the kernel code in io_uring.c.
It makes me pull my hair out (what's left of it - mostly beard).
Apart from saving system call costs, I don't actually understand
why it isn't a userspace library.

Anyway, I thought the point of io_uring was to attempt to implement
asynchronous IO on a unix system.
If you want async IO you need to go back to the mid 1970s and pick
the ancestors of RSX-11M rather than those of K&R's unix.
That leads you to VMS and then Windows NT.

And yes, I have written code that did async IO under RSX-11M.

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)



