Re: [PATCH] blk-mq-rdma: remove queue mapping helper for rdma devices

On Thu, Mar 23, 2023 at 10:03:25AM -0300, Jason Gunthorpe wrote:
> > > > Given that nvme-rdma was the only consumer, do you prefer this goes from
> > > > the nvme tree?
> > > 
> > > Sure, it is probably fine
> > 
> > I tried to do it two+ years ago:
> > https://lore.kernel.org/all/20200929091358.421086-1-leon@xxxxxxxxxx
> 
> Christoph's points make sense, but I think we should still purge this
> code.

Given that, as a matter of policy, we don't keep dead code around in
the kernel, we should probably remove it.  That being said, I'm really
sad about this, as I think what the RDMA code does here right now is
pretty broken.

> If we want to do proper managed affinity the right RDMA API is to
> directly ask for the desired CPU binding when creating the CQ, and
> optionally a way to change the CPU binding of the CQ at runtime.

Changing the bindings causes a lot of nasty interactions with CPU
hotplug.  The managed affinity and the way blk-mq interacts with it
is designed around the hotunplug notifier quiescing the queues,
and I'm not sure we can get everything right without a strict
binding to a set of CPUs.
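
To be clear, the interface being asked for would presumably look
something like the below (purely hypothetical: neither function exists
in the RDMA core today, the names are made up only to illustrate the
idea):

/*
 * Hypothetical sketch only: an explicit CPU binding requested at CQ
 * allocation time, plus an optional runtime rebind, instead of a comp
 * vector whose affinity has to be looked up after the fact.
 */
struct ib_cq *ib_alloc_cq_on_cpu(struct ib_device *dev, void *private,
                int nr_cqe, unsigned int cpu,
                enum ib_poll_context poll_ctx);

int ib_cq_rebind_cpu(struct ib_cq *cq, unsigned int cpu);

and it is the runtime rebind that gets hairy with hotplug as described
above.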

> This obfuscated 'comp vector number' thing is nonsensical for a kAPI -
> creating a CQ on a random CPU then trying to backwards figure out what
> CPU it was created on is silly.

Yes.
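
For reference, this is roughly what the helper being removed does today
(simplified sketch from memory, details may differ from what is in the
tree):

/*
 * Sketch of block/blk-mq-rdma.c: for each hw queue, ask the RDMA core
 * which CPUs the queue's completion vector happens to be affine to and
 * map the queue back onto those CPUs, falling back to the default
 * spread if the driver does not report an affinity mask.
 */
void blk_mq_rdma_map_queues(struct blk_mq_queue_map *map,
                struct ib_device *dev, int first_vec)
{
        const struct cpumask *mask;
        unsigned int queue, cpu;

        for (queue = 0; queue < map->nr_queues; queue++) {
                mask = ib_get_vector_affinity(dev, first_vec + queue);
                if (!mask)
                        goto fallback;

                for_each_cpu(cpu, mask)
                        map->mq_map[cpu] = map->queue_offset + queue;
        }
        return;

fallback:
        blk_mq_map_queues(map);
}

The CPU binding is derived after the fact from whatever the comp vector
ended up with, which is exactly the backwards lookup complained about
above.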


