On Thu, 2016-01-14 at 11:25 -0700, Jens Axboe wrote:
> On 01/14/2016 11:07 AM, Ming Lin wrote:
> > From: Ming Lin <ming.l@xxxxxxxxxxxxxxx>
> >
> > Suppose that a system has 8 logical CPUs (4 cores with hyperthread)
> > and that 5 hardware queues are provided by a block driver.
> > With the current algorithm this will lead to the following assignment
> > of logical CPU to hardware queue:
> >
> > HWQ 0: 0 1
> > HWQ 1: 2 3
> > HWQ 2: 4 5
> > HWQ 3: 6 7
> > HWQ 4: (none)
> >
> > One way to fix it is to change the algorithm so the assignment may be:
> >
> > HWQ 0: 0 1
> > HWQ 1: 2 3
> > HWQ 2: 4 5
> > HWQ 3: 6
> > HWQ 4: 7
>
> This has been suggested before, but the previous mapping is actually
> what I originally intended, so that it's symmetric. So by design, not a
> bug. It's not that I'm completely adverse to changing it, but I've yet
> to see a compelling reason to do so (and your patch doesn't have that
> either, it just states that it changes the mapping from X to Y).

Yes, my patch doesn't do that; it only mentions it as a possible fix.
I'll remove that wording to avoid confusion.

My patch only checks whether all HW queues are mapped.

While developing the NVMe over Fabrics (NVMeOF) driver, a new function
blk_mq_alloc_request_hctx() was added:

struct request *blk_mq_alloc_request_hctx(
		struct request_queue *q, int rw,
		unsigned int flags, unsigned int hctx_idx);

===
Author: Christoph Hellwig <hch@xxxxxx>
Date:   Mon Nov 30 19:45:48 2015 +0100

    blk-mq: add blk_mq_alloc_request_hctx

    For some protocols like NVMe over Fabrics we need to be able
    to send initialization commands to a specific queue.
===

This function assumes all hctx are mapped; otherwise it crashes
because hctx->tags is NULL.

During testing, different hardware queue counts are passed into the
NVMeOF driver. On my setup (8 logical CPUs: 4 cores with hyperthreading),
5 hardware queues make it crash, because queue 4 is not mapped.

So we'd better check the mapping and fail blk_mq_init_queue() if not
all HW queues are mapped (rough sketch of the check below).

Agree?
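To make it concrete, here is a rough, untested sketch of the kind of
check I have in mind. The helper name blk_mq_all_hw_queues_mapped() is
made up for illustration, and it assumes it runs after
blk_mq_map_swqueue(), which leaves an unmapped hctx with no software
ctxs and hctx->tags set to NULL:

#include <linux/blk-mq.h>

/*
 * Hypothetical helper (name invented for this sketch), meant to live in
 * block/blk-mq.c next to blk_mq_map_swqueue(): walk every hardware
 * context after the software -> hardware queue mapping has been built
 * and report whether any of them ended up with no CPUs assigned.
 */
static bool blk_mq_all_hw_queues_mapped(struct request_queue *q)
{
	struct blk_mq_hw_ctx *hctx;
	unsigned int i;

	queue_for_each_hw_ctx(q, hctx, i) {
		/* An unmapped hctx has no software ctxs and no tags. */
		if (!hctx->nr_ctx || !hctx->tags)
			return false;
	}
	return true;
}

blk_mq_init_queue() could then return an error when this check fails,
instead of handing out a queue that crashes later in
blk_mq_alloc_request_hctx().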