Re: [PATCH 2/2] io_uring: switch cancel_hash to use per list spinlock

On 5/30/22 00:34, Jens Axboe wrote:
On 5/29/22 4:50 PM, Pavel Begunkov wrote:
On 5/29/22 19:40, Jens Axboe wrote:
On 5/29/22 12:07 PM, Hao Xu wrote:
On 5/30/22 00:25, Jens Axboe wrote:
On 5/29/22 10:20 AM, Hao Xu wrote:
From: Hao Xu <howeyxu@xxxxxxxxxxx>

Use a per-list lock for cancel_hash; this removes some completion lock
invocations and removes contention between different cancel_hash entries.

Interesting, do you have any numbers on this?

Just theoretical for now, I'll do some tests tomorrow. This is
actually an RFC; I forgot to change the subject.


Also, I'd make a hash bucket struct:

struct io_hash_bucket {
      spinlock_t lock;
      struct hlist_head list;
};

rather than two separate structs. That'll have nicer memory locality
too and should further improve it. It could be done as a prep patch with
the old locking in place, making the final patch that does the
per-bucket locking simpler as well.
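Roughly, the insert side would then only take the lock of the bucket
the request hashes to. A quick sketch of what I mean, with placeholder
names and not taken from the actual patch:

static void io_poll_req_insert(struct io_kiocb *req)
{
	struct io_ring_ctx *ctx = req->ctx;
	u32 index = hash_long(req->user_data, ctx->cancel_hash_bits);
	struct io_hash_bucket *hb = &ctx->cancel_hash[index];

	spin_lock(&hb->lock);
	hlist_add_head(&req->hash_node, &hb->list);
	spin_unlock(&hb->lock);
}

Two requests hashing to different buckets then never touch the same
lock, and the lock sits right next to the list head it protects.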

Sure, if the test numbers make sense, I'll send v2. I'll test the
hlist_bl list as well (its comment says it is much slower than a normal
spin_lock, but maybe we don't care much about the efficiency of poll
cancellation?).

I don't think the bit spinlocks are going to be useful; we should
stick with a spinlock for this. They are indeed slower, and generally not
used for that reason. For a use case where you need a ton of locks and
saving the 4 bytes of a spinlock makes sense (or where it avoids changing
some struct?), maybe they have a purpose. But not for this.
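For reference, the bl lists mainly exist to save space: the lock is a
single bit stuffed into bit 0 of the head pointer and taken with a bit
spinlock, roughly like this (from memory, not exact kernel source):

#include <linux/list_bl.h>

static void example_add(struct hlist_bl_head *head,
			struct hlist_bl_node *node)
{
	hlist_bl_lock(head);		/* bit_spin_lock(0, ...) on the head */
	hlist_bl_add_head(node, head);
	hlist_bl_unlock(head);
}

You keep the head pointer-sized, but pay with a plain test-and-set spin
and none of the fairness/queueing a regular spinlock gives you.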

We can put the cancel hashes under uring_lock and completely kill
the hash spinlocking (2 lock/unlock pairs per single-shot). The code
below won't even compile and is missing the cancellation bits; I'll
pick it up in a week.

Even better would be to have two hash tables, and auto-magically apply
the feature to SINGLE_SUBMITTER, SQPOLL (both will have uring_lock held)
and apoll (which needs uring_lock afterwards anyway).
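Not the actual diff (as said, that one doesn't even compile yet), but
very roughly the insert side could pick the table based on whether
uring_lock is already guaranteed; all names below are placeholders:

static void io_poll_req_insert(struct io_kiocb *req, bool uring_locked)
{
	struct io_ring_ctx *ctx = req->ctx;
	u32 index = hash_long(req->user_data, ctx->cancel_hash_bits);

	if (uring_locked) {
		/* SINGLE_SUBMITTER, SQPOLL and apoll: uring_lock is held */
		hlist_add_head(&req->hash_node,
			       &ctx->cancel_hash_locked[index]);
	} else {
		struct io_hash_bucket *hb = &ctx->cancel_hash[index];

		spin_lock(&hb->lock);
		hlist_add_head(&req->hash_node, &hb->list);
		spin_unlock(&hb->lock);
	}
}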

My hope was that it'd take us closer to being able to use more granular
locking for hashing in general. I don't care too much about the
cancellation side, but the normal hash locking would be useful to do.

However, for cancellations, doing it under uring_lock would indeed be
preferable to per-bucket locks there. I guess I'll wait and see what your
final patch looks like; not sure why it'd be a ctx conditional, though?

It replaces 2 spin lock/unlock pairs with one io_tw_lock() in the
completion path, which is done once per tw batch, and the lock is
grabbed there anyway if there is no contention (see handle_tw_list()).
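For context, io_tw_lock() is roughly just a conditional mutex grab of
uring_lock that then stays held for the rest of the batch:

/* roughly what io_tw_lock() does; the lock is kept for the tw batch */
static inline void io_tw_lock(struct io_ring_ctx *ctx, bool *locked)
{
	if (!*locked) {
		mutex_lock(&ctx->uring_lock);
		*locked = true;
	}
}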

It could be unconditional, but I'd say for those 3 cases there is
virtually no chance of regressing perf/latency, whereas I can think of
some cases where it might hurt latencies; all of them share an io_uring
between threads.

It should benefit the cancellation path as well, but I don't care much
about that either.

What about io_poll_remove_all()?

As mentioned, it's not handled in the diff, but it's easily doable;
it should just traverse both hash tables.
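Something along these lines, with a placeholder helper that is not in
the diff:

static bool io_poll_remove_all(struct io_ring_ctx *ctx,
			       struct task_struct *tsk, bool cancel_all)
{
	bool found;

	/* table protected by uring_lock */
	found = io_poll_remove_all_table(ctx, ctx->cancel_hash_locked,
					 tsk, cancel_all);
	/* table protected by the per-bucket spinlocks */
	found |= io_poll_remove_all_table(ctx, ctx->cancel_hash,
					  tsk, cancel_all);
	return found;
}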

--
Pavel Begunkov


