Re: [PATCH] blk-mq-rdma: remove queue mapping helper for rdma devices

On Thu, Mar 23, 2023 at 02:05:15PM +0200, Leon Romanovsky wrote:
> On Wed, Mar 22, 2023 at 10:50:22AM -0300, Jason Gunthorpe wrote:
> > On Wed, Mar 22, 2023 at 03:00:08PM +0200, Sagi Grimberg wrote:
> > > 
> > > > > No rdma device exposes its irq vector affinity today, so the only
> > > > > mapping we have left is the default blk_mq_map_queues, which we
> > > > > fall back to anyway. Also fix up the only consumer of this helper
> > > > > (nvme-rdma).
> > > > 
> > > > This was the only caller of ib_get_vector_affinity(), so please delete
> > > > the get_vector_affinity op and ib_get_vector_affinity() from verbs as well.
> > > 
> > > Yep, no problem.
> > > 
> > > Given that nvme-rdma was the only consumer, do you prefer that this go
> > > through the nvme tree?
> > 
> > Sure, it is probably fine
> 
> I tried to do it two+ years ago:
> https://lore.kernel.org/all/20200929091358.421086-1-leon@xxxxxxxxxx

Christoph's points make sense, but I think we should still purge this
code.
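
For reference, the helper being removed boils down to roughly the
following (a from-memory sketch, not the exact tree contents). With every
driver returning NULL from .get_vector_affinity it always takes the
fallback path, so it is just a roundabout blk_mq_map_queues():

	void blk_mq_rdma_map_queues(struct blk_mq_queue_map *map,
			struct ib_device *dev, int first_vec)
	{
		const struct cpumask *mask;
		unsigned int queue, cpu;

		for (queue = 0; queue < map->nr_queues; queue++) {
			/* NULL for every upstream driver today */
			mask = ib_get_vector_affinity(dev, first_vec + queue);
			if (!mask)
				goto fallback;

			for_each_cpu(cpu, mask)
				map->mq_map[cpu] = map->queue_offset + queue;
		}
		return;

	fallback:
		blk_mq_map_queues(map);
	}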

If we want to do proper managed affinity, the right RDMA API is to ask
directly for the desired CPU binding when creating the CQ, and optionally
to provide a way to change the CQ's CPU binding at runtime.
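
Something along these lines, purely hypothetical to illustrate the shape I
mean (neither function exists in verbs today):

	/*
	 * Hypothetical: the ULP states the CPU it wants the CQ's completion
	 * work bound to at creation time, instead of picking an opaque
	 * comp_vector and reverse-engineering the affinity afterwards.
	 */
	struct ib_cq *ib_alloc_cq_on_cpu(struct ib_device *dev, void *private,
					 int nr_cqe, unsigned int cpu,
					 enum ib_poll_context poll_ctx);

	/* Optionally rebind the CQ later, e.g. when the desired mask changes. */
	int ib_cq_set_cpu(struct ib_cq *cq, unsigned int cpu);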

This obfuscated 'comp vector number' thing is nonsensical for a kAPI:
creating a CQ on an effectively random CPU and then trying to work out
backwards which CPU it was created on is silly.
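
Today the flow is something like this (a sketch of the existing
ib_alloc_cq()/ib_get_vector_affinity() usage, not lifted from any
particular driver):

	/* Pick comp_vector i without knowing which CPU that means... */
	cq = ib_alloc_cq(dev, ctx, nr_cqe, i /* comp_vector */, IB_POLL_SOFTIRQ);

	/* ...then try to figure out, after the fact, where it landed. */
	mask = ib_get_vector_affinity(dev, i);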

Jason


