On Wed, May 23, 2018 at 9:47 AM, jianchao.wang <jianchao.w.wang@xxxxxxxxxx> wrote:
> Hi Omar,
>
> Thanks for your kind response.
>
> On 05/23/2018 04:02 AM, Omar Sandoval wrote:
>> On Tue, May 22, 2018 at 10:48:29PM +0800, Jianchao Wang wrote:
>>> Currently, kyber is very unfriendly to merging. kyber depends
>>> on the ctx rq_list to do merging; however, most of the time it will not
>>> leave any requests in the ctx rq_list. This is because even if the tokens
>>> of one domain are used up, kyber will try to dispatch requests
>>> from the other domains and flush the rq_list there.
>>
>> That's a great catch, I totally missed this.
>>
>> This approach does end up duplicating a lot of code with the blk-mq core
>> even after Jens' change, so I'm curious if you tried other approaches.
>> One idea I had is to try the bio merge against the kqd->rqs lists. Since
>> that's per-queue, the locking overhead might be too high. Alternatively,
>
> Yes, I made a patch as you suggest, trying the bio merge against kqd->rqs
> directly. That patch looks even simpler. However, because khd->lock is needed
> on every bio merge attempt, there may be high contention on khd->lock when
> the cpu-hctx mapping is not 1:1.
>
>> you could keep the software queues as-is but add our own version of
>> flush_busy_ctxs() that only removes requests of the domain that we want.
>> If one domain gets backed up, that might get messy with long iterations,
>> though.
>
> Yes, I also considered this approach :)
> But the long iterations over every ctx->rq_list look really inefficient.

Right, this list can become quite long if the dispatch token is used up.

You might try to introduce per-domain lists into ctx directly. Then 'none'
may benefit from this change too, since bio merge should really be done
against the per-domain list anyway.

Thanks,
Ming Lei