Thanks for the inputs, will address the review comments in the next version.

Thanks,
Hariprasad k

On Wed, Jan 18, 2023 at 04:21:04PM +0530, Hariprasad Kelam wrote:
> From: Subbaraya Sundeep <sbhatta@xxxxxxxxxxx>
>
> In the current implementation, the number of Send queues (SQs) is
> decided at device probe and equals the number of online CPUs. These
> SQs are allocated and deallocated in the interface open and close
> calls respectively.
>
> This patch defines new APIs for initializing and deinitializing Send
> queues dynamically and allocates additional transmit queues for the
> QoS feature.
>
> Signed-off-by: Subbaraya Sundeep <sbhatta@xxxxxxxxxxx>
> Signed-off-by: Hariprasad Kelam <hkelam@xxxxxxxxxxx>
> Signed-off-by: Sunil Kovvuri Goutham <sgoutham@xxxxxxxxxxx>

...

> diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
> index 88f8772a61cd..0868ae825736 100644
> --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
> +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
> @@ -758,11 +758,16 @@ int otx2_txschq_stop(struct otx2_nic *pfvf)
>  void otx2_sqb_flush(struct otx2_nic *pfvf)
>  {
>  	int qidx, sqe_tail, sqe_head;
> +	struct otx2_snd_queue *sq;
>  	u64 incr, *ptr, val;
>  	int timeout = 1000;
>
>  	ptr = (u64 *)otx2_get_regaddr(pfvf, NIX_LF_SQ_OP_STATUS);
> -	for (qidx = 0; qidx < pfvf->hw.tot_tx_queues; qidx++) {
> +	for (qidx = 0; qidx < pfvf->hw.tot_tx_queues +
> +		       pfvf->hw.tc_tx_queues;

nit: It seems awkward that this essentially says the total number of
tx queues is 'tot_tx_queues' + 'tc_tx_queues', as I read 'tot' as
being short for 'total'.

Also, the pfvf->hw.tot_tx_queues + pfvf->hw.tc_tx_queues pattern is
rather verbose and repeated often. Perhaps a helper would... help.

Will add these changes in next version.

> diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
> index c1ea60bc2630..3acda6d289d3 100644
> --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
> +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c

...

> @@ -1688,11 +1693,13 @@ int otx2_open(struct net_device *netdev)
>
>  	netif_carrier_off(netdev);
>
> -	pf->qset.cq_cnt = pf->hw.rx_queues + pf->hw.tot_tx_queues;
>  	/* RQ and SQs are mapped to different CQs,
>  	 * so find out max CQ IRQs (i.e CINTs) needed.
>  	 */
>  	pf->hw.cint_cnt = max(pf->hw.rx_queues, pf->hw.tx_queues);
> +	pf->hw.cint_cnt = max_t(u8, pf->hw.cint_cnt, pf->hw.tc_tx_queues);

nit: maybe this is nicer? *completely untested!*

	pf->hw.cint_cnt = max3(pf->hw.rx_queues, pf->hw.tx_queues,
			       pf->hw.tc_tx_queues);

Will add these changes in next version.

...

> @@ -735,7 +741,10 @@ static void otx2_sqe_add_hdr(struct otx2_nic *pfvf, struct otx2_snd_queue *sq,
>  	sqe_hdr->aura = sq->aura_id;
>  	/* Post a CQE Tx after pkt transmission */
>  	sqe_hdr->pnc = 1;
> -	sqe_hdr->sq = qidx;
> +	if (pfvf->hw.tx_queues == qidx)
> +		sqe_hdr->sq = qidx + pfvf->hw.xdp_queues;
> +	else
> +		sqe_hdr->sq = qidx;

nit: maybe this is nicer? *completely untested!*

	sqe_hdr->sq = pfvf->hw.tx_queues == qidx ?
		      qidx + pfvf->hw.xdp_queues : qidx;

Will add these changes in next version.

...
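
For reference, a minimal sketch of what the helper suggested above
could look like; the name otx2_get_total_tx_queues() and its placement
in otx2_common.h are hypothetical here, not taken from the series:

	/* Hypothetical helper: total send queues = the regular/XDP
	 * queues counted in tot_tx_queues plus the extra queues
	 * reserved for the QoS feature.
	 */
	static inline int otx2_get_total_tx_queues(struct otx2_nic *pfvf)
	{
		return pfvf->hw.tot_tx_queues + pfvf->hw.tc_tx_queues;
	}

With that, the loop bound in otx2_sqb_flush() above would collapse to:

	for (qidx = 0; qidx < otx2_get_total_tx_queues(pfvf); qidx++) {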
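
One caveat on the max3() idea, assuming the three queue-count fields
are not all declared with the same type (the patch's use of
max_t(u8, ...) suggests they may differ): max3() is built on max() from
<linux/minmax.h>, which enforces strict type matching at build time, so
mixed types would need casts or nested max_t() instead. A sketch of the
nested form, assuming u8 is wide enough for these counts, as the
original max_t(u8, ...) implies:

	/* Single-statement variant of the patch's two-step max()/max_t();
	 * all operands are forced to u8, matching the original max_t(u8, ...).
	 */
	pf->hw.cint_cnt = max_t(u8, max_t(u8, pf->hw.rx_queues, pf->hw.tx_queues),
				pf->hw.tc_tx_queues);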