Re: [PATCH v2 0/3] cancel_hash per entry lock

On 6/6/22 20:02, Pavel Begunkov wrote:
On 6/6/22 08:06, Hao Xu wrote:
On 6/6/22 14:57, Hao Xu wrote:
From: Hao Xu <howeyxu@xxxxxxxxxxx>

Add a per-entry lock for the cancel_hash array; this reduces usage of
completion_lock and contention between cancel_hash entries (a rough
sketch of the idea follows the changelog below).

v1->v2:
  - Add a per-entry lock for the poll/apoll task work code, which was
    missed in v1
  - Add a member to io_kiocb to track the req's index in cancel_hash
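
To illustrate the per-entry locking idea, here is a minimal userspace
sketch using pthread spinlocks (not the kernel code from this series;
the names hash_bucket/entry and the bucket count are made up). Each
bucket carries its own lock, so insert/remove on different buckets no
longer serialize on one global lock the way everything serializes on
completion_lock today:

	#include <pthread.h>
	#include <stddef.h>

	#define NR_BUCKETS 256

	struct entry {
		struct entry *next;
		unsigned long key;		/* e.g. a poll cancel key */
	};

	struct hash_bucket {
		pthread_spinlock_t lock;	/* per-entry (per-bucket) lock */
		struct entry *head;
	};

	static struct hash_bucket table[NR_BUCKETS];

	static void table_init(void)
	{
		for (size_t i = 0; i < NR_BUCKETS; i++) {
			pthread_spin_init(&table[i].lock, PTHREAD_PROCESS_PRIVATE);
			table[i].head = NULL;
		}
	}

	/* Insert under the bucket lock only; other buckets stay untouched. */
	static void table_insert(struct entry *e)
	{
		struct hash_bucket *b = &table[e->key % NR_BUCKETS];

		pthread_spin_lock(&b->lock);
		e->next = b->head;
		b->head = e;
		pthread_spin_unlock(&b->lock);
	}

	/* Remove by key; returns the entry or NULL, again taking one lock. */
	static struct entry *table_remove(unsigned long key)
	{
		struct hash_bucket *b = &table[key % NR_BUCKETS];
		struct entry **pp, *e;

		pthread_spin_lock(&b->lock);
		for (pp = &b->head; (e = *pp) != NULL; pp = &e->next) {
			if (e->key == key) {
				*pp = e->next;
				break;
			}
		}
		pthread_spin_unlock(&b->lock);
		return e;
	}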

Tried to test it with many poll_add IOSQE_ASYNC requests, but it turned
out that there is little completion_lock contention, so no visible change
in the numbers. But I still think this may help cancel_hash access in
real cases where completion_lock matters.
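
Roughly, the kind of load described above can be generated with
something like the liburing sketch below (illustrative only; the pipe
fd, the queue depth, and the omitted cancel/completion handling are
assumptions, not the actual test used). Each poll_add is forced to
io-wq with IOSQE_ASYNC so the requests hash into cancel_hash from
worker context:

	#include <liburing.h>
	#include <poll.h>
	#include <stdio.h>
	#include <unistd.h>

	#define NR_POLLS 4096

	int main(void)
	{
		struct io_uring ring;
		int pipefd[2], i, ret;

		if (pipe(pipefd) < 0 || io_uring_queue_init(NR_POLLS, &ring, 0) < 0) {
			perror("setup");
			return 1;
		}

		for (i = 0; i < NR_POLLS; i++) {
			struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);

			if (!sqe)
				break;
			/* Poll on a pipe that never becomes readable. */
			io_uring_prep_poll_add(sqe, pipefd[0], POLLIN);
			/* Force punting to io-wq so workers run in parallel. */
			sqe->flags |= IOSQE_ASYNC;
		}
		ret = io_uring_submit(&ring);
		fprintf(stderr, "submitted %d poll requests\n", ret);

		/* ... cancel or complete them here to exercise cancel_hash removal ... */
		io_uring_queue_exit(&ring);
		close(pipefd[0]);
		close(pipefd[1]);
		return 0;
	}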

Conceptually I don't mind it, but let me ask in what
circumstances you expect it to make a difference? And

I suppose there are cases where a bunch of users try to access
cancel_hash[] at the same time, e.g. when people use multiple threads to
submit sqes or use IOSQE_ASYNC, and these io-workers or task works run
in parallel on different CPUs.

what can we do to get favourable numbers? For instance,
how many CPUs was io-wq using?

It is not easy to construct manually since it depends on task
scheduling. For example, if we just issue many IOSQE_ASYNC polls on an
idle machine with many CPUs, there won't be much contention, because
the threads start at different times (and thus access cancel_hash at
different times).




