Re: [PATCH vhost 3/6] virtio_net: replace private by pp struct inside page

On Fri, Apr 12, 2024 at 1:39 PM Xuan Zhuo <xuanzhuo@xxxxxxxxxxxxxxxxx> wrote:
>
> On Fri, 12 Apr 2024 12:47:55 +0800, Jason Wang <jasowang@xxxxxxxxxx> wrote:
> > On Thu, Apr 11, 2024 at 10:51 AM Xuan Zhuo <xuanzhuo@xxxxxxxxxxxxxxxxx> wrote:
> > >
> > > Currently, we chain the pages of big mode via the page's private field.
> > > But a subsequent patch aims to make big mode support premapped mode,
> > > which requires additional space to store the DMA address.
> > >
> > > Within the sub-struct that contains 'private', there is no suitable
> > > field for storing the DMA address.
> > >
> > >                 struct {        /* Page cache and anonymous pages */
> > >                         /**
> > >                          * @lru: Pageout list, eg. active_list protected by
> > >                          * lruvec->lru_lock.  Sometimes used as a generic list
> > >                          * by the page owner.
> > >                          */
> > >                         union {
> > >                                 struct list_head lru;
> > >
> > >                                 /* Or, for the Unevictable "LRU list" slot */
> > >                                 struct {
> > >                                         /* Always even, to negate PageTail */
> > >                                         void *__filler;
> > >                                         /* Count page's or folio's mlocks */
> > >                                         unsigned int mlock_count;
> > >                                 };
> > >
> > >                                 /* Or, free page */
> > >                                 struct list_head buddy_list;
> > >                                 struct list_head pcp_list;
> > >                         };
> > >                         /* See page-flags.h for PAGE_MAPPING_FLAGS */
> > >                         struct address_space *mapping;
> > >                         union {
> > >                                 pgoff_t index;          /* Our offset within mapping. */
> > >                                 unsigned long share;    /* share count for fsdax */
> > >                         };
> > >                         /**
> > >                          * @private: Mapping-private opaque data.
> > >                          * Usually used for buffer_heads if PagePrivate.
> > >                          * Used for swp_entry_t if PageSwapCache.
> > >                          * Indicates order in the buddy system if PageBuddy.
> > >                          */
> > >                         unsigned long private;
> > >                 };
> > >
> > > But the page_pool sub-struct has a field called dma_addr that is
> > > well suited to storing the DMA address, and that sub-struct is used
> > > by the netstack. That works to our advantage.
> > >
> > >                 struct {        /* page_pool used by netstack */
> > >                         /**
> > >                          * @pp_magic: magic value to avoid recycling non
> > >                          * page_pool allocated pages.
> > >                          */
> > >                         unsigned long pp_magic;
> > >                         struct page_pool *pp;
> > >                         unsigned long _pp_mapping_pad;
> > >                         unsigned long dma_addr;
> > >                         atomic_long_t pp_ref_count;
> > >                 };
> > >
> > > On the other hand, we should only use fields from the same sub-struct.
> > > So this patch replaces "private" with "pp" for chaining the pages.
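> > >
> > > As a rough illustration only (these helpers are made up here, not
> > > part of this patch), chaining through the pp slot while leaving
> > > dma_addr free for the mapping could look like:
> > >
> > >                 /* Treat page->pp as an opaque "next" pointer for the
> > >                  * big mode page chain.
> > >                  */
> > >                 static struct page *page_chain_next(struct page *p)
> > >                 {
> > >                         return (struct page *)p->pp;
> > >                 }
> > >
> > >                 static void page_chain_add(struct page *p, struct page *next)
> > >                 {
> > >                         p->pp = (struct page_pool *)next;
> > >                 }
> > >
> > >                 /* dma_addr in the same sub-struct then holds the
> > >                  * premapped DMA address of the page.
> > >                  */
> > >                 static void page_dma_addr_set(struct page *p, dma_addr_t addr)
> > >                 {
> > >                         p->dma_addr = (unsigned long)addr;
> > >                 }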
> > >
> > > Signed-off-by: Xuan Zhuo <xuanzhuo@xxxxxxxxxxxxxxxxx>
> > > ---
> >
> > Instead of doing a customized version of page pool, can we simply
> > switch to use page pool for big mode instead? Then we don't need to
> > bother the dma stuffs.
>
>
> The page pool has to do the DMA mapping through the DMA APIs.
> So we cannot use the page pool directly.

I found this:

#define PP_FLAG_DMA_MAP         BIT(0) /* Should page_pool do the DMA
                                         * map/unmap
                                         */

It seems to work here?
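
For reference, a minimal sketch of what that could look like (the
parameter values below are illustrative, and for virtio the DMA device
would presumably be the parent device that actually does the DMA):

        struct page_pool_params pp_params = {
                .flags     = PP_FLAG_DMA_MAP,   /* pool does the map/unmap */
                .order     = 0,
                .pool_size = 256,
                .nid       = NUMA_NO_NODE,
                .dev       = vdev->dev.parent,  /* device used for DMA */
                .dma_dir   = DMA_FROM_DEVICE,
        };
        struct page_pool *pool = page_pool_create(&pp_params);
        struct page *page = page_pool_alloc_pages(pool, GFP_KERNEL);

        /* The pool already mapped the page, so the driver only reads the
         * DMA address back instead of calling the DMA APIs itself.
         */
        dma_addr_t dma = page_pool_get_dma_addr(page);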

Thanks

>
> Thanks.
>
>
> >
> > Thanks
> >
>





