[PATCHSET 0/2] io_uring: improve handling of buffered writes


 



XFS/ext4/others all need to lock the inode for buffered writes. Since
io_uring handles any IO in an async manner, this means that for higher
queue depth buffered write workloads, we have a lot of workers
hammering on the same mutex.

Running a QD=32 random write workload on my test box yields about 200K
4k random write IOPS with io_uring. Looking at system profiles, we're
spending about half the time contending on the inode mutex. Oof.
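A workload like the one described can be approximated with an fio job along these lines; the filename, size, and runtime below are illustrative assumptions, not the author's actual test configuration:

```ini
; QD=32 4k buffered random writes via io_uring, approximating the
; workload described above. filename/size/runtime are hypothetical.
[buffered-randwrite]
ioengine=io_uring
rw=randwrite
bs=4k
iodepth=32
direct=0
filename=/mnt/test/fiofile
size=1g
runtime=30
time_based=1
```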

For buffered writes, we don't necessarily need a huge number of threads
issuing that IO. If we instead rely on normal flushing to take care of
getting the parallelism we need on the device side, we can limit
ourselves to a much lower depth. This still gets us async behavior on
the submission side.

With this small series, my 200K IOPS goes to 370K IOPS for the same
workload.

This issue came out of postgres implementing io_uring support and
reporting some of the problems they ran into.

-- 
Jens Axboe





