[PATCH 0/2] optimise blk_try_enter_queue()

Kill the extra rcu_read_lock/unlock() pair in blk_try_enter_queue().
Tested with io_uring (high batching) on top of nullblk:

Before:
3.20%  io_uring  [kernel.vmlinux]  [k] __rcu_read_unlock
3.05%  io_uring  [kernel.vmlinux]  [k] __rcu_read_lock

After:
2.52%  io_uring  [kernel.vmlinux]  [k] __rcu_read_unlock
2.28%  io_uring  [kernel.vmlinux]  [k] __rcu_read_lock

It doesn't necessarily translate into a 1.4% performance improvement,
but it's nice to have.
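
For context, blk_try_enter_queue() already runs under its own
rcu_read_lock(), while percpu_ref_tryget_live() internally takes and
drops a second, nested RCU read lock. Patch 1 adds a tryget_live
variant that instead relies on the caller holding RCU. A rough sketch
of the idea (abbreviated, not the verbatim patch):

static inline bool percpu_ref_tryget_live_rcu(struct percpu_ref *ref)
{
	unsigned long __percpu *percpu_count;
	bool ret = false;

	/* sketch: the caller must be in an RCU read-side section */
	WARN_ON_ONCE(!rcu_read_lock_held());

	if (likely(__ref_is_percpu(ref, &percpu_count))) {
		/* fast path: per-cpu mode, plain per-cpu increment */
		this_cpu_inc(*percpu_count);
		ret = true;
	} else if (!(ref->percpu_count_ptr & __PERCPU_REF_DEAD)) {
		/* atomic mode but still live: atomic tryget */
		ret = atomic_long_inc_not_zero(&ref->data->count);
	}
	return ret;
}

percpu_ref_tryget_live() then becomes a thin wrapper taking
rcu_read_lock()/unlock() around this helper, so existing callers keep
working unchanged.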

Pavel Begunkov (2):
  percpu_ref: percpu_ref_tryget_live() version holding RCU
  block: kill extra rcu lock/unlock in queue enter

 block/blk-core.c                |  2 +-
 include/linux/percpu-refcount.h | 31 +++++++++++++++++++++----------
 2 files changed, 22 insertions(+), 11 deletions(-)
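
With the helper in place, the block side is effectively a one-line
substitution in blk_try_enter_queue(), roughly (sketch, surrounding
code elided):

static bool blk_try_enter_queue(struct request_queue *q, bool pm)
{
	rcu_read_lock();
	/* was percpu_ref_tryget_live(), which nested another RCU pair */
	if (!percpu_ref_tryget_live_rcu(&q->q_usage_counter))
		goto fail;
	...
}

which is where the __rcu_read_lock/__rcu_read_unlock savings in the
profile above come from.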

-- 
2.33.1