Hi Max,
> Yes, since nvmf is the only user of this function. Still waiting for
> comments on the suggested patch :)
Sorry for the late response (but I'm on vacation, so I have an excuse ;)).

I'm thinking that we should avoid trying to find an assignment when something
like the irqbalance daemon is running and changing the affinitization. This
extension was made to apply an optimal affinity assignment when the device IRQ
affinity is lined up in a vector per core. I'm thinking that when we identify
this is not the case, we should immediately fall back to the default mapping:

1. when we get a mask, if its weight != 1, we fall back.
2. if a queue was left unmapped, we fall back.

Maybe something like the following:
--
diff --git a/block/blk-mq-rdma.c b/block/blk-mq-rdma.c
index 996167f1de18..1ada6211c55e 100644
--- a/block/blk-mq-rdma.c
+++ b/block/blk-mq-rdma.c
@@ -35,17 +35,26 @@ int blk_mq_rdma_map_queues(struct blk_mq_tag_set *set,
 	const struct cpumask *mask;
 	unsigned int queue, cpu;
 
+	/* reset all CPUs mapping */
+	for_each_possible_cpu(cpu)
+		set->mq_map[cpu] = UINT_MAX;
+
 	for (queue = 0; queue < set->nr_hw_queues; queue++) {
 		mask = ib_get_vector_affinity(dev, first_vec + queue);
 		if (!mask)
 			goto fallback;
 
-		for_each_cpu(cpu, mask)
-			set->mq_map[cpu] = queue;
+		if (cpumask_weight(mask) != 1)
+			goto fallback;
+
+		cpu = cpumask_first(mask);
+		if (set->mq_map[cpu] != UINT_MAX)
+			goto fallback;
+
+		set->mq_map[cpu] = queue;
 	}
 
 	return 0;
-
 fallback:
 	return blk_mq_map_queues(set);
 }
--