On 3/22/20 10:09 AM, Pavel Begunkov wrote:
>>  		/* not hashed, can run anytime */
>>  		if (!io_wq_is_hashed(work)) {
>> -			wq_node_del(&wqe->work_list, node, prev);
>> -			return work;
>> +			/* already have hashed work, let new worker get this */
>> +			if (ret) {
>> +				struct io_wqe_acct *acct;
>> +
>> +				/* get new worker for unhashed, if none now */
>> +				acct = io_work_get_acct(wqe, work);
>> +				if (!atomic_read(&acct->nr_running))
>> +					io_wqe_wake_worker(wqe, acct);
>> +				break;
>> +			}
>> +			wq_node_del(&wqe->work_list, &work->list);
>> +			ret = work;
>> +			break;
>>  		}
>>
>>  		/* hashed, can run if not already running */
>> -		hash = io_get_work_hash(work);
>> -		if (!(wqe->hash_map & BIT(hash))) {
>> +		new_hash = io_get_work_hash(work);
>> +		if (wqe->hash_map & BIT(new_hash))
>> +			break;
>
> This will always break for subsequent hashed work, as the @hash_map bit
> is set, won't it? And anyway, it doesn't seem to optimise non-contiguous
> same-hashed requests, e.g.:
>
> 0: Link(hash=0)
> 1: Link(hash=1)
> 2: Link(hash=0)
> 3: Link(not_hashed)
> 4: Link(hash=0)
> ...

Yeah, I think I shuffled a bit too much there; it should only break on
that if hash != new_hash. Will fix!

>> @@ -530,6 +565,24 @@ static void io_worker_handle_work(struct io_worker *worker)
>>  			work = NULL;
>>  		}
>>  		if (hash != -1U) {
>> +			/*
>> +			 * If the local list is non-empty, then we
>> +			 * have work that hashed to the same key.
>> +			 * No need for a lock round-trip, or fiddling
>> +			 * with the free/busy state of the worker, or
>> +			 * clearing the hashed state. Just process the
>> +			 * next one.
>> +			 */
>> +			if (!work) {
>> +				work = wq_first_entry(&list,
>> +						      struct io_wq_work,
>> +						      list);
>
> Wouldn't it just drop a linked request? Probably works because of the
> comment above.

It only drops if we always override 'work'; if it's already set, we use
'work' and don't grab from the list. So that should work for links.

>> +				if (work) {
>> +					wq_node_del(&list, &work->list);
>
> There is a bug, apparently from one of my commits, where it does
> io_assign_current_work() but then re-enqueues and reassigns new work,
> leaving a gap where a cancel can happen, which would screw everything
> up.
>
> I'll send a patch, so it'll be more clear. However, this is a good
> point to watch out for here as well.

Saw it, I'll apply it when you have the Fixes line, and respin this one
after.

-- 
Jens Axboe
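
A minimal sketch of the break-condition fix agreed on above, for
reference only; this is not the respun patch. It assumes the loop shape
from the quoted hunk, where 'hash' holds the hash currently being
batched (-1U if none), 'new_hash' is the hash of the item under
inspection, and 'hash_map', io_get_work_hash(), and BIT() are as used
there:

	/* hashed, can run if not already running */
	new_hash = io_get_work_hash(work);
	if (wqe->hash_map & BIT(new_hash)) {
		/*
		 * The bit may be set because we're the ones batching
		 * this hash on the local list; only a *different*
		 * already-running hash should stop the scan.
		 */
		if (hash != new_hash)
			break;
	}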
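Similarly, a hypothetical condensation of the 'work' precedence being
discussed, using the names from the quoted hunk: a link continuation
already sitting in 'work' takes priority, and the local same-hash list
is only consulted when 'work' is NULL, which is why linked requests are
not dropped:

	if (hash != -1U) {
		/*
		 * If a linked request is already queued up in 'work',
		 * run it next; only pull the next same-hashed entry
		 * off the local list when there is no continuation.
		 */
		if (!work) {
			work = wq_first_entry(&list, struct io_wq_work,
					      list);
			if (work)
				wq_node_del(&list, &work->list);
		}
	}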