> From: Haiyang Zhang <haiyangz@xxxxxxxxxxxxx>
> Sent: Saturday, August 21, 2021 2:18 PM
> To: Dexuan Cui <decui@xxxxxxxxxxxxx>; linux-hyperv@xxxxxxxxxxxxxxx;
> >
> > 4) support up to 64 queues per net interface (it was 16). It looks like
> > the default number of queues is also 64 if the VM has >= 64 CPUs? --
> > should we add a new field apc->default_queues and limit it to 16 or 32?
> > We'd like to make sure the best performance can typically be achieved
> > with the default number of queues.
>
> I found that on a 40-CPU VM, mana_query_vport_cfg() returns max_txq:32,
> max_rxq:32, so I didn't further reduce the number (32) from the PF.
>
> That's also the opinion of the host team -- if they upgrade the NIC HW
> in the future, they can adjust the setting on the PF side without
> requiring a VF driver change.

Ah, I forgot this. Thanks for the explanation!

> > 5) If the VM has >= 64 CPUs, with the patch we create 1 HWC EQ and 64
> > NIC EQs, and IMO the creation of the last NIC EQ fails, since the host
> > PF driver currently allows only 64 MSI-X interrupts. If this is the
> > case, I think mana_probe() -> mana_create_eq() fails and no net
> > interface will be created. It looks like we should create at most 63
> > NIC EQs in this case, and make sure we don't create too many SQs/RQs
> > accordingly.
> >
> > At the end of mana_gd_query_max_resources(), should we add something
> > like:
> >
> > 	if (gc->max_num_queues >= gc->num_msix_usable - 1)
> > 		gc->max_num_queues = gc->num_msix_usable - 1;
>
> As said, the PF allows 32 queues and 64 MSI-X interrupts for now.
> The PF should increase the MSI-X limit if the number of queues is
> increased to 64+.

Makes sense. My description was a false alarm.

> But for robustness, I like your idea of adding a check like the above
> in the VF.

Thanks!