Re: [PATCH mlx5-next v7 0/4] Dynamically assign MSI-X vectors count

On Thu, Mar 25, 2021 at 02:36:46PM -0300, Jason Gunthorpe wrote:
> On Thu, Mar 25, 2021 at 12:21:44PM -0500, Bjorn Helgaas wrote:
> 
> > NVMe and mlx5 have basically identical functionality in this respect.
> > Other devices and vendors will likely implement similar functionality.
> > It would be ideal if we had an interface generic enough to support
> > them all.
> > 
> > Is the mlx5 interface proposed here sufficient to support the NVMe
> > model?  I think it's close, but not quite, because the NVMe
> > "offline" state isn't explicitly visible in the mlx5 model.
> 
> I thought Keith basically said "offline" wasn't really useful as a
> distinct idea. It is an artifact of nvme being a standards body
> divorced from the operating system.

I think that was someone else who said that.

FWIW, the nvme "offline" state just means a driver can't use the nvme
capabilities of the device. You can bind a driver to it if you want, but
no IO will be possible, so it's fine if you bind your VF to something
like vfio prior to starting a VM, or just not have any driver bound
during the initial resource assignment.
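
To make that concrete, something along these lines (standard sysfs
driver_override/bind paths; the BDF is made up) hands the VF to vfio-pci
before the VM starts:

  # Rough sketch: detach the VF from whatever driver currently owns it
  # and steer the next probe to vfio-pci.  Assumes vfio-pci is already
  # loaded; the VF address below is hypothetical.
  import os

  VF_BDF = "0000:65:00.2"
  DEV = "/sys/bus/pci/devices/" + VF_BDF

  def write(path, value):
      with open(path, "w") as f:
          f.write(value)

  # Unbind from the current driver, if any.
  if os.path.exists(DEV + "/driver"):
      write(DEV + "/driver/unbind", VF_BDF)

  # Force the next probe to pick vfio-pci, then trigger the probe.
  write(DEV + "/driver_override", "vfio-pci")
  write("/sys/bus/pci/drivers_probe", VF_BDF)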
 
> In linux offline and no driver attached are the same thing, you'd
> never want an API to make a nvme device with a driver attached offline
> because it would break the driver.
> 
> So I think it is good as is (well one of the 8 versions anyhow).
> 
> Keith didn't go into detail why the queue allocations in nvme were any
> different than the queue allocations in mlx5. 

The NVMe IO queue resources are assignable just like the MSIx vectors.
But they're not always assigned 1:1. For example:

  NVMe has an admin queue that always requires an interrupt vector. Does
  the VM driver want that vector shared with the IO queues, or do we
  want a dedicated +1 vector for that queue?

  Maybe the VM is going to use a user space polling driver, so now you
  don't even need MSIx vectors on the function assigned to that VM. You
  just need to assign the IO queue resources, and reserve the MSIx
  resources for another function.

  The Linux nvme driver allows a mix of poll + interrupt queues, so the
  user may want to allocate more IO queues than interrupts (a rough
  accounting sketch follows this list).
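
Purely as illustration (these helper names are made up, not a driver
interface), the accounting the VM owner has to do looks something like:

  # Illustrative only: how many MSI-X vectors does a VF need for a given
  # queue layout?  Polled IO queues consume queue resources but no vectors.
  def vf_resources(io_irq_queues, io_poll_queues, dedicated_admin_vector):
      vectors = io_irq_queues + (1 if dedicated_admin_vector else 0)
      queues = io_irq_queues + io_poll_queues + 1   # +1 for the admin queue
      return vectors, queues

  # e.g. 8 interrupt-driven + 4 polled IO queues, admin sharing vector 0:
  # vf_resources(8, 4, False) -> (8, 13), i.e. 13 queue resources but
  # only 8 vectors -- the two counts are not interchangeable.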

A kernel interface for assigning interrupt vectors gets us only halfway
to configuring the assignable resources.
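
For reference, against the sysfs knobs this series proposes (names as I
read them in v7: sriov_vf_total_msix on the PF, sriov_vf_msix_count on
each VF; the BDFs are made up), the vector half would look roughly like:

  # Rough sketch of the proposed interface: read the PF's vector pool
  # and carve 8 vectors out for one VF.  The VF must not be bound to a
  # driver at this point; BDFs are hypothetical.
  PF = "/sys/bus/pci/devices/0000:65:00.0"
  VF = "/sys/bus/pci/devices/0000:65:00.2"

  with open(PF + "/sriov_vf_total_msix") as f:
      print("PF vector pool:", f.read().strip())

  with open(VF + "/sriov_vf_msix_count", "w") as f:
      f.write("8")

Nothing in that says how many admin/IO/poll queue resources the VF gets,
which is the other half.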

> I expect they can probably work the same where the # of interrupts is
> an upper bound on the # of CPUs that can get queues, and the device,
> once instantiated, could be configured for the number of queues to
> actually operate, if it wants.


