Re: [PATCH blk-next 1/2] blk-mq-rdma: Delete not-used multi-queue RDMA map queue code

From: Leon Romanovsky <leonro@xxxxxxxxxx>

The RDMA vector affinity code is not backed by any driver anymore and
always returns NULL to every ib_get_vector_affinity() call.

This means that blk_mq_rdma_map_queues() always takes the fallback path.

Fixes: 9afc97c29b03 ("mlx5: remove support for ib_get_vector_affinity")
Signed-off-by: Leon Romanovsky <leonro@xxxxxxxxxx>
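
For context, the mapping logic in question is roughly the following (a
paraphrased sketch of block/blk-mq-rdma.c, not the verbatim source). With
ib_get_vector_affinity() always returning NULL, the very first iteration
jumps straight to the blk_mq_map_queues() fallback:

int blk_mq_rdma_map_queues(struct blk_mq_queue_map *map,
		struct ib_device *dev, int first_vec)
{
	const struct cpumask *mask;
	unsigned int queue, cpu;

	for (queue = 0; queue < map->nr_queues; queue++) {
		/* No driver implements vector affinity, so this is NULL. */
		mask = ib_get_vector_affinity(dev, first_vec + queue);
		if (!mask)
			goto fallback;

		/* Map every CPU in the vector's affinity mask to this hw queue. */
		for_each_cpu(cpu, mask)
			map->mq_map[cpu] = map->queue_offset + queue;
	}

	return 0;

fallback:
	/* Default spread, same as any non-RDMA block driver gets. */
	return blk_mq_map_queues(map);
}

So removing the helper does not change behaviour; callers end up with the
same blk_mq_map_queues() spread either way.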

So you guys totally broke the nvme queue assignment without even
telling anyone?  Great job!

Who is "you guys"? And it wasn't silent either; I'm sure that Sagi knows the craft.
https://lore.kernel.org/linux-rdma/20181224221606.GA25780@xxxxxxxx/

commit 759ace7832802eaefbca821b2b43a44ab896b449
Author: Sagi Grimberg <sagi@xxxxxxxxxxx>
Date:   Thu Nov 1 13:08:07 2018 -0700

     i40iw: remove support for ib_get_vector_affinity

....

commit 9afc97c29b032af9a4112c2f4a02d5313b4dc71f
Author: Sagi Grimberg <sagi@xxxxxxxxxxx>
Date:   Thu Nov 1 09:13:12 2018 -0700

     mlx5: remove support for ib_get_vector_affinity

Thanks

Yes, basically the use of managed affinity caused people to report
regressions about not being able to change irq affinity from procfs.

Back then I started a discussion with Thomas about making managed
affinity still allow userspace to modify it, but that effort was
dropped at some point. So currently rdma cannot do automatic irq
affinitization out of the box.
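
To make the regression concrete, this is the kind of userspace operation
that stops working once an IRQ uses managed affinity (a minimal sketch;
the IRQ number 42 is a hypothetical example, and the write is typically
rejected for managed IRQs):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* Hypothetical IRQ number; pick a real one from /proc/interrupts. */
	int fd = open("/proc/irq/42/smp_affinity", O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/*
	 * Try to pin the IRQ to CPUs 0-3. For a managed IRQ the kernel
	 * refuses this write, which is the regression users reported.
	 */
	if (write(fd, "f\n", 2) < 0)
		perror("write smp_affinity");

	close(fd);
	return 0;
}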


