RE: [net-next v3 1/2] devlink: Support setting max_io_eqs

> From: Jakub Kicinski <kuba@xxxxxxxxxx>
> Sent: Friday, April 5, 2024 7:44 PM
> 
> On Fri, 5 Apr 2024 03:13:36 +0000 Parav Pandit wrote:
> > Netdev QP (txq, rxq pair) channels are typically created by the driver up
> > to the number of CPUs, provided it has enough IO event queues to match the
> > CPU count.
> >
> > RDMA QPs far outnumber netdev QPs because multiple processes use them;
> > they are a per user space process resource.
> > Those applications size their number of QPs based on the number of CPUs
> > and the number of event channels delivering notifications to user space.
> 
> Some other drivers (e.g. intel) support multiple queues per core in netdev.
> For mlx5 I think AF_XDP may be a good example (or used to be) where there
> may be more than one queue?
>
Yes, multiple netdev queues may be connected to one EQ.
For example, as you described, mlx5 AF_XDP; mlx5 also creates multiple txqs (one per traffic class) per channel, all linked to the channel's single EQ.
But those txqs are still per channel AFAIK.
 
> So I think the question still stands even for netdev.
> We should document whether number of EQs contains the number of Rx/Tx
> queues.
> 
I believe the number of txqs/rxqs can exceed the number of EQs, with multiple queues connecting to the same EQ.
Netdev channels have a more accurate linkage to EQs than raw txqs/rxqs do.
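As a rough sketch of the difference (eth0 is a placeholder; exact counts are device and configuration specific):

    # Channel count; roughly one EQ per combined channel:
    $ ethtool -l eth0
    # Raw queue count can be higher, e.g. multiple TCs multiply txqs:
    $ ls /sys/class/net/eth0/queues/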

> > The driver uses IRQs dynamically, up to the PCI limit, based on the number
> > of supported IO event queues.
> 
> Right but one IRQ <> one EQ? Typically / always?
Typically yes, one IRQ <> one EQ.
> SFs "share" the IRQs with PF IIRC, do they share EQs?
>
SFs do not share EQs; each SF has its own dedicated EQs.
You remember right that they share IRQs.
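For example (a sketch; mlx5 names its completion vectors mlx5_comp<N> in /proc/interrupts, though naming can vary across kernel versions), the IRQ vectors shared by the PF and its SFs are visible system wide:

    # Count mlx5 completion IRQ vectors on the host:
    $ grep -c mlx5_comp /proc/interrupts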
 
> > > The next patch says "maximum IO event queues which are typically
> > > used to derive the maximum and default number of net device channels"
> > > It may not be obvious to non-mlx5 experts, I think it needs to be
> > > better documented.
> > I will expand the documentation in .../networking/devlink/devlink-port.rst.
> >
> > I will add below change to the v4 that has David's comments also
> addressed.
> > Is this ok for you?
> 
> Looks like a good start but I think a few more sentences describing the
> relation to other resources would be good.
>
I think EQs are a limited object that does not have a wider relation to the rest of the stack.
The relation to IRQs is probably a good addition.
Along with the changes below, I will add the reference to IRQs in v4.
 
> > --- a/Documentation/networking/devlink/devlink-port.rst
> > +++ b/Documentation/networking/devlink/devlink-port.rst
> > @@ -304,6 +304,11 @@ When user sets maximum number of IO event queues
> > for a SF or a VF, such function driver is limited to consume only
> > enforced number of IO event queues.
> >
> > +IO event queues deliver events related to IO queues, including
> > +network device transmit and receive queues (txq and rxq) and RDMA
> > +Queue Pairs (QPs).
> > +For example, the number of netdevice channels and RDMA device
> > +completion vectors are derived from the function's IO event queues.
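As a usage sketch based on this series (the port address and value are placeholders, and the exact syntax may still change before merge):

    # Limit the function backing eswitch port 1 to 32 IO event queues:
    $ devlink port function set pci/0000:06:00.0/1 max_io_eqs 32
    # Inspect the configured limit:
    $ devlink port show pci/0000:06:00.0/1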