[PATCH 0/2] optimise blk_try_enter_queue()

Kill extra rcu_read_lock/unlock() pair in blk_try_enter_queue().
Tested with io_uring (high batching) against null_blk:

Before:
3.20%  io_uring  [kernel.vmlinux]  [k] __rcu_read_unlock
3.05%  io_uring  [kernel.vmlinux]  [k] __rcu_read_lock

After:
2.52%  io_uring  [kernel.vmlinux]  [k] __rcu_read_unlock
2.28%  io_uring  [kernel.vmlinux]  [k] __rcu_read_lock

The ~1.4% of CPU cycles saved in the profile doesn't necessarily
translate into a 1.4% performance improvement, but it's nice to have.
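
The idea for patch 1 is a tryget variant that relies on the caller
already being inside an RCU read-side section, so it doesn't have to
take rcu_read_lock() itself. A rough sketch of the shape (illustrative,
not the exact patch; __ref_is_percpu() and __PERCPU_REF_DEAD are
existing percpu-refcount internals):

static inline bool percpu_ref_tryget_live_rcu(struct percpu_ref *ref)
{
	unsigned long __percpu *percpu_count;
	bool ret = false;

	/* caller must hold rcu_read_lock() */
	WARN_ON_ONCE(!rcu_read_lock_held());

	if (likely(__ref_is_percpu(ref, &percpu_count))) {
		/* fast path: percpu mode, plain percpu increment */
		this_cpu_inc(*percpu_count);
		ret = true;
	} else if (!(ref->percpu_count_ptr & __PERCPU_REF_DEAD)) {
		/* atomic mode: only succeed while the ref is still live */
		ret = atomic_long_inc_not_zero(&ref->data->count);
	}
	return ret;
}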

Pavel Begunkov (2):
  percpu_ref: percpu_ref_tryget_live() version holding RCU
  block: kill extra rcu lock/unlock in queue enter

 block/blk-core.c                |  2 +-
 include/linux/percpu-refcount.h | 31 +++++++++++++++++++++----------
 2 files changed, 22 insertions(+), 11 deletions(-)
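
Patch 2 then switches blk_try_enter_queue() over to the new helper
under the RCU section it already holds, instead of
percpu_ref_tryget_live(), which takes and drops the RCU read lock
internally. Roughly (sketch only, with the pm/pm_only checks elided):

static bool blk_try_enter_queue(struct request_queue *q, bool pm)
{
	rcu_read_lock();
	/* reuse the section we already hold instead of nesting another */
	if (!percpu_ref_tryget_live_rcu(&q->q_usage_counter))
		goto fail;

	/* ... pm_only / runtime-PM checks elided ... */

	rcu_read_unlock();
	return true;

fail:
	rcu_read_unlock();
	return false;
}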

-- 
2.33.1
