[Question] on blk_mq_map_queues()

Hi All,

I have a question about the core-to-hw-queue mapping in blk_mq_map_queues(). Looking at the latest version, we map the first $nr_queues cores to separate queues, and then map sibling threaded cores to the same queue. First off, I am curious why we always map the first batch of cores to separate queues.

As I understand it, for blk-mq we try to map cores which are physically close to the same hw queue (in the case of more cores than hw queues). By this rationale, we would not map the first $nr_queues cores each to a separate queue, but would instead map cores [0, $nr_cores/$nr_queues) to q0, [$nr_cores/$nr_queues, 2*$nr_cores/$nr_queues) to q1, and so on (assuming, for simplicity, that $nr_queues evenly divides $nr_cores).

So what is the full desired mapping behaviour?

The specific problem I see is that on my 64-core system (no hyperthreading) with 16 hw queues, cores within the same 4-core cluster are mapped to different queues, when I would expect them to be mapped to the same queue.

I know that we can add our own per-driver mapping function to solve this, but I would expect the generic mapper to cover a generic platform.

Thanks in advance,
John



