> -----Original Message-----
> From: Brian Norris [mailto:briannorris@xxxxxxxxxxxx]
> Sent: Saturday, January 23, 2021 6:58 AM
> To: Pkshih
> Cc: Yan-Hsuan Chuang; Kalle Valo; linux-wireless; Bernie Huang
> Subject: Re: [PATCH v4 3/8] rtw88: add napi support
>
> On Fri, Jan 15, 2021 at 1:26 AM Ping-Ke Shih <pkshih@xxxxxxxxxxx> wrote:
> > --- a/drivers/net/wireless/realtek/rtw88/pci.c
> > +++ b/drivers/net/wireless/realtek/rtw88/pci.c
> > @@ -935,16 +935,49 @@ static void rtw_pci_tx_isr(struct rtw_dev *rtwdev, struct rtw_pci *rtwpci,
> >          ring->r.rp = cur_rp;
> >  }
> >
> > -static void rtw_pci_rx_isr(struct rtw_dev *rtwdev, struct rtw_pci *rtwpci,
> > +static void rtw_pci_rx_isr(struct rtw_dev *rtwdev)
> > +{
> > +        struct rtw_pci *rtwpci = (struct rtw_pci *)rtwdev->priv;
> > +        struct napi_struct *napi = &rtwpci->napi;
> > +
> > +        napi_schedule(napi);
>
> I don't claim to be all that familiar with NAPI, but my understanding
> is that you are scheduling a NAPI poll, but immediately after you
> return here, you're re-enabling your RX interrupt. That doesn't sound
> like how NAPI is supposed to work -- you're supposed to wait to
> re-enable your interrupt until you're done with your polling function.
> Ref: https://wiki.linuxfoundation.org/networking/napi
>

I will disable the RX IMR here and re-enable it only after the
napi_poll function has finished.
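Roughly like this untested sketch (rtw_pci_disable_rx_isr() and
rtw_pci_enable_rx_isr() are placeholder names for whatever helpers
will mask/unmask the RX bit of the IMR):

static void rtw_pci_rx_isr(struct rtw_dev *rtwdev)
{
        struct rtw_pci *rtwpci = (struct rtw_pci *)rtwdev->priv;

        /* Mask RX in the IMR; it stays masked while NAPI polls. */
        rtw_pci_disable_rx_isr(rtwdev, rtwpci);
        napi_schedule(&rtwpci->napi);
}

and at the end of the poll function:

        if (work_done < budget) {
                napi_complete_done(napi, work_done);
                /* Polling is finished; only now unmask RX again. */
                rtw_pci_enable_rx_isr(rtwdev, rtwpci);
        }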
> > +}
> ...
> > +static u32 rtw_pci_rx_napi(struct rtw_dev *rtwdev, struct rtw_pci *rtwpci,
> >                             u8 hw_queue)
> ...
>
> Are you sure you don't want any locking in rtw_pci_rx_napi()?
> Previously, you held irq_lock for the entirety of rtw_pci_rx_isr(),
> but now all the RX work is being deferred to a NAPI context, without
> any additional lock. IIUC, that means you can be both handling RX and
> other ISR operations at the same time. Is that intentional?
>

irq_lock is used to protect the TX ring->queue: TX skb(s) are queued
there, and an skb is unlinked only when its TX_OK_ISR is received.
So the RX path doesn't need to hold this lock.

> > +static int rtw_pci_napi_poll(struct napi_struct *napi, int budget)
> > +{
> > +        struct rtw_pci *rtwpci = container_of(napi, struct rtw_pci, napi);
> > +        struct rtw_dev *rtwdev = container_of((void *)rtwpci, struct rtw_dev,
> > +                                              priv);
> > +        int work_done = 0;
> > +
> > +        while (work_done < budget) {
> > +                u32 work_done_once;
> > +
> > +                work_done_once = rtw_pci_rx_napi(rtwdev, rtwpci,
> > +                                                 RTW_RX_QUEUE_MPDU);
> > +                if (work_done_once == 0)
> > +                        break;
> > +                work_done += work_done_once;
> > +        }
> > +        if (work_done < budget) {
> > +                napi_complete_done(napi, work_done);
> > +                /* When ISR happens during polling and before napi_complete
> > +                 * while no further data is received. Data on the dma_ring will
> > +                 * not be processed immediately. Check whether dma ring is
> > +                 * empty and perform napi_schedule accordingly.
> > +                 */
> > +                if (rtw_pci_get_hw_rx_ring_nr(rtwdev, rtwpci, NULL, NULL))
> > +                        napi_schedule(napi);
> > +        }
> > +
> > +        return work_done;
> > +}
> > +
> > +static void rtw_pci_napi_init(struct rtw_dev *rtwdev)
> > +{
> > +        struct rtw_pci *rtwpci = (struct rtw_pci *)rtwdev->priv;
> > +
> > +        init_dummy_netdev(&rtwpci->netdev);
> > +        netif_napi_add(&rtwpci->netdev, &rtwpci->napi, rtw_pci_napi_poll,
> > +                       RTW_NAPI_WEIGHT_NUM);
> > +        napi_enable(&rtwpci->napi);
> > +}
> ...
> > @@ -1547,6 +1624,8 @@ int rtw_pci_probe(struct pci_dev *pdev,
> >                  goto err_destroy_pci;
> >          }
> >
> > +        rtw_pci_napi_init(rtwdev);
>
> You're initializing NAPI after you've already established your ISR,
> and your ISR might start scheduling NAPI. Even if that's unlikely
> (because you haven't initiated any RX traffic yet), it seems like an
> ordering problem -- shouldn't you initialize the NAPI device, then set
> up the ISR, and only then call napi_enable()?
>

Will do it. Thanks for your advice.

---
Ping-Ke
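PS: For the probe ordering, the plan is roughly the following sketch.
napi_enable() would move out of rtw_pci_napi_init(), and the
err_deinit_napi label is illustrative, not the final patch:

        /* 1) Set up the NAPI context, but do not enable it yet. */
        rtw_pci_napi_init(rtwdev);

        /* 2) Install the ISR (via the existing rtw_pci_request_irq()
         *    helper in pci.c).
         */
        ret = rtw_pci_request_irq(rtwdev, pdev);
        if (ret)
                goto err_deinit_napi;

        /* 3) Only once the ISR is in place, allow NAPI to run. */
        napi_enable(&rtwpci->napi);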