Re: [PATCH net-next 2/4] net: page_pool: add bulk support for ptr_ring

On Thu, 29 Oct 2020 11:31:48 +0100
Lorenzo Bianconi <lorenzo.bianconi@xxxxxxxxxx> wrote:

> > On Tue, 27 Oct 2020 20:04:08 +0100
> > Lorenzo Bianconi <lorenzo@xxxxxxxxxx> wrote:
> >   
> > > +void page_pool_put_page_bulk(struct page_pool *pool, void **data,
> > > +			     int count)
> > > +{
> > > +	struct page *page_ring[XDP_BULK_QUEUE_SIZE];  
> > 
> > Maybe we could reuse the 'data' array instead of creating a new array
> > (2 cache-lines long) for the array of pages?  
> 
> I agree, I will try to reuse the data array for that
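(For the record, a completely untested sketch of what I had in mind,
reusing the names from the posted patch -- convert the entries in place
so data[] doubles as the array of pages handed to the ptr_ring:)

	void page_pool_put_page_bulk(struct page_pool *pool, void **data,
				     int count)
	{
		int i, len = 0;

		for (i = 0; i < count; i++) {
			struct page *page = virt_to_head_page(data[i]);

			if (unlikely(page_ref_count(page) != 1 ||
				     !pool_page_reusable(pool, page))) {
				page_pool_release_page(pool, page);
				put_page(page);
				continue;
			}

			if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
				page_pool_dma_sync_for_device(pool, page, -1);

			/* Overwrite the consumed entry in place, so data[]
			 * also serves as the bulk array for the ptr_ring and
			 * the extra 2-cacheline page_ring[] on the stack goes
			 * away.
			 */
			data[len++] = page;
		}

		/* ... then produce data[0..len-1] into pool->ring under the
		 * producer lock, as the patch already does ...
		 */
	}
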
> 
> >   
> > > +	int i, len = 0;
> > > +
> > > +	for (i = 0; i < count; i++) {
> > > +		struct page *page = virt_to_head_page(data[i]);
> > > +
> > > +		if (unlikely(page_ref_count(page) != 1 ||
> > > +			     !pool_page_reusable(pool, page))) {
> > > +			page_pool_release_page(pool, page);
> > > +			put_page(page);
> > > +			continue;
> > > +		}
> > > +
> > > +		if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
> > > +			page_pool_dma_sync_for_device(pool, page, -1);  
> > 
> > Here we sync the entire DMA area (-1), which has a *huge* cost for
> > mvneta (especially on EspressoBin HW).  For this, xdp_frame->len is
> > unfortunately not enough.  We will need the *maximum* length touched
> > by (1) the CPU and (2) the remote device DMA engine.  DMA-TX
> > completion knows the length for (2).  The CPU length (1) is the max
> > of the original xdp_buff size and xdp_frame->len, because BPF-helpers
> > could have shrunk the size.  (The tricky part is that xdp_frame->len
> > isn't correct in case of header adjustments; thus, like
> > mvneta_run_xdp, we need to calc the dma_sync size and store it in
> > xdp_frame, maybe via a param to xdp_do_redirect.)  Not sure if it is
> > too much work to transfer this info for this use-case.  
> 
> I was thinking about that, but I guess point (1) is tricky, since the
> "cpu length" can change along the way in devmaps or cpumaps (not just
> in the driver rx napi loop). I guess we can try to address this point
> in a subsequent series. Agree?

I agree that this change request goes beyond this series.  But it
becomes harder and harder to add later, once this API is used in more
and more drivers.  Looking at 1/4, it can be extended later, since the
drivers just pass the xdpf down through the API (and then queue
xdpf->data).
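
For when we get back to this, roughly what I have in mind, as a
completely untested sketch (the dma_sync_size field is a placeholder,
xdp_frame does not carry it today, and headroom/offset handling is
omitted):

	/* Driver RX side (like mvneta_run_xdp): record the max length the
	 * CPU has touched, since BPF helpers may have moved data/data_end.
	 */
	u32 sync_len = max_t(u32, xdpf->len,
			     xdp->data_end - xdp->data_hard_start);
	xdpf->dma_sync_size = sync_len;		/* placeholder field */

	/* DMA-TX completion / bulk free side: bound the sync to what was
	 * actually touched, instead of syncing the full area (-1).
	 */
	if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
		page_pool_dma_sync_for_device(pool, page,
					      xdpf->dma_sync_size);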

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
