[PATCH 0/2] check the number of hw queues mapped to sw queues

From: Ming Lin <ming.l@xxxxxxxxxxx>

Please see patch 2 for a detailed bug description.

Say, on a machine with 8 CPUs, we create 6 I/O queues (blk-mq hw queues):
    
echo "transport=rdma,traddr=192.168.2.2,nqn=testiqn,nr_io_queues=6" \
            > /dev/nvme-fabrics
    
Then only 4 of the hw queues actually get mapped to CPU sw queues:
    
HW Queue 1 <-> CPU 0,4
HW Queue 2 <-> CPU 1,5
HW Queue 3 <-> None
HW Queue 4 <-> CPU 2,6
HW Queue 5 <-> CPU 3,7
HW Queue 6 <-> None

Back in Jan 2016, I sent a patch:
[PATCH] blk-mq: check if all HW queues are mapped to cpu
http://www.spinics.net/lists/linux-block/msg01038.html

It added check code to blk_mq_update_queue_map(), but that seems too
aggressive, because it is not an error for some hw queues to be left
unmapped to sw queues.

So this series just adds a new function, blk_mq_hctx_mapped(), that
returns how many hw queues were mapped, and any driver that cares about
it (for example, nvme-rdma) can do the check itself.

Ming Lin (2):
  blk-mq: add a function to return number of hw queues mapped
  nvme-rdma: check the number of hw queues mapped

 block/blk-mq.c           | 15 +++++++++++++++
 drivers/nvme/host/rdma.c | 11 +++++++++++
 include/linux/blk-mq.h   |  1 +
 3 files changed, 27 insertions(+)

-- 
1.9.1
