Re: [PATCH v4 1/3] vduse: avoid using __GFP_NOFAIL

On Mon, Sep 2, 2024 at 4:30 PM David Hildenbrand <david@xxxxxxxxxx> wrote:
>
> On 02.09.24 09:58, Jason Wang wrote:
> > On Mon, Sep 2, 2024 at 3:33 PM David Hildenbrand <david@xxxxxxxxxx> wrote:
> >>
> >> On 30.08.24 22:28, Barry Song wrote:
> >>> From: Jason Wang <jasowang@xxxxxxxxxx>
> >>>
> > >>> mm does not support non-blockable __GFP_NOFAIL allocations:
> > >>> honouring __GFP_NOFAIL for callers that cannot perform direct
> > >>> memory reclaim would only result in an endless busy loop.
> >>>
> >>> Therefore, in such cases, the current mm-core may directly return
> >>> a NULL pointer:
> >>>
> >>> static inline struct page *
> >>> __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> >>>                                                   struct alloc_context *ac)
> >>> {
> >>>           ...
> >>>           if (gfp_mask & __GFP_NOFAIL) {
> >>>                   /*
> >>>                    * All existing users of the __GFP_NOFAIL are blockable, so warn
> >>>                    * of any new users that actually require GFP_NOWAIT
> >>>                    */
> >>>                   if (WARN_ON_ONCE_GFP(!can_direct_reclaim, gfp_mask))
> >>>                           goto fail;
> >>>                   ...
> >>>           }
> >>>           ...
> >>> fail:
> >>>           warn_alloc(gfp_mask, ac->nodemask,
> >>>                           "page allocation failure: order:%u", order);
> >>> got_pg:
> >>>           return page;
> >>> }
> >>>
> > >>> Unfortunately, vdpa performs that nofail allocation under a
> > >>> non-sleepable lock. A possible way to fix it is to move the page
> > >>> allocation out of the lock and into the caller, but having to
> > >>> allocate a huge number of pages plus an auxiliary page array seems
> > >>> problematic as well, per Tetsuo: "You should implement proper error
> > >>> handling instead of using __GFP_NOFAIL if count can become large."
> >>>
> > >>> So I chose another way, which is to not release the kernel bounce
> > >>> pages when userspace registers its own bounce pages. We can then
> > >>> avoid allocating in paths where failure is not an option (e.g. in
> > >>> the release path). The cost is higher memory usage, since the
> > >>> kernel bounce pages are kept around, but further optimizations can
> > >>> be done on top.
> >>>
> >>> Fixes: 6c77ed22880d ("vduse: Support using userspace pages as bounce buffer")
> >>> Reviewed-by: Xie Yongji <xieyongji@xxxxxxxxxxxxx>
> >>> Tested-by: Xie Yongji <xieyongji@xxxxxxxxxxxxx>
> >>> Signed-off-by: Jason Wang <jasowang@xxxxxxxxxx>
> >>> [v-songbaohua@xxxxxxxx: Refine the changelog]
> >>> Signed-off-by: Barry Song <v-songbaohua@xxxxxxxx>
> >>> ---
> >>>    drivers/vdpa/vdpa_user/iova_domain.c | 19 +++++++++++--------
> >>>    drivers/vdpa/vdpa_user/iova_domain.h |  1 +
> >>>    2 files changed, 12 insertions(+), 8 deletions(-)
> >>>
> >>> diff --git a/drivers/vdpa/vdpa_user/iova_domain.c b/drivers/vdpa/vdpa_user/iova_domain.c
> >>> index 791d38d6284c..58116f89d8da 100644
> >>> --- a/drivers/vdpa/vdpa_user/iova_domain.c
> >>> +++ b/drivers/vdpa/vdpa_user/iova_domain.c
> >>> @@ -162,6 +162,7 @@ static void vduse_domain_bounce(struct vduse_iova_domain *domain,
> >>>                                enum dma_data_direction dir)
> >>>    {
> >>>        struct vduse_bounce_map *map;
> >>> +     struct page *page;
> >>>        unsigned int offset;
> >>>        void *addr;
> >>>        size_t sz;
> >>> @@ -178,7 +179,10 @@ static void vduse_domain_bounce(struct vduse_iova_domain *domain,
> >>>                            map->orig_phys == INVALID_PHYS_ADDR))
> >>>                        return;
> >>>
> >>> -             addr = kmap_local_page(map->bounce_page);
> >>> +             page = domain->user_bounce_pages ?
> >>> +                    map->user_bounce_page : map->bounce_page;
> >>> +
> >>> +             addr = kmap_local_page(page);
> >>>                do_bounce(map->orig_phys + offset, addr + offset, sz, dir);
> >>>                kunmap_local(addr);
> >>>                size -= sz;
> >>> @@ -270,9 +274,8 @@ int vduse_domain_add_user_bounce_pages(struct vduse_iova_domain *domain,
> >>>                                memcpy_to_page(pages[i], 0,
> >>>                                               page_address(map->bounce_page),
> >>>                                               PAGE_SIZE);
> >>> -                     __free_page(map->bounce_page);
> >>>                }
> >>> -             map->bounce_page = pages[i];
> >>> +             map->user_bounce_page = pages[i];
> >>>                get_page(pages[i]);
> >>>        }
> >>>        domain->user_bounce_pages = true;
> >>> @@ -297,17 +300,17 @@ void vduse_domain_remove_user_bounce_pages(struct vduse_iova_domain *domain)
> >>>                struct page *page = NULL;
> >>>
> >>>                map = &domain->bounce_maps[i];
> >>> -             if (WARN_ON(!map->bounce_page))
> >>> +             if (WARN_ON(!map->user_bounce_page))
> >>>                        continue;
> >>>
> >>>                /* Copy user page to kernel page if it's in use */
> >>>                if (map->orig_phys != INVALID_PHYS_ADDR) {
> >>> -                     page = alloc_page(GFP_ATOMIC | __GFP_NOFAIL);
> >>> +                     page = map->bounce_page;
> >>
> >> Why don't we need a kmap_local_page(map->bounce_page) here, but we might
> >> perform one / have performed one in vduse_domain_bounce?
> >
> > I think it's another bug that needs to be fixed.
> >
> > Yongji, do you want to fix this?
>
> Or maybe it works because "map->bounce_page" is now always a kernel
> page,

Yes, the userspace bounce page is now stored in user_bounce_page, so
map->bounce_page stays a kernel page.

> and never one from user space that might reside in highmem.

Right. So we are actually fine :)
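
To spell out the rule being applied here, a minimal sketch (the two
helpers below are made-up illustrations, not VDUSE code):

#include <linux/highmem.h>
#include <linux/mm.h>
#include <linux/string.h>

/*
 * A kernel bounce page allocated without __GFP_HIGHMEM always has a
 * permanent lowmem mapping, so page_address() is valid directly.
 */
static void copy_from_kernel_bounce(struct page *kpage, void *dst, size_t len)
{
	memcpy(dst, page_address(kpage), len);
}

/*
 * A pinned userspace page may live in highmem, so it has no permanent
 * kernel mapping and must be mapped temporarily before being touched.
 */
static void copy_from_user_bounce(struct page *upage, void *dst, size_t len)
{
	void *addr = kmap_local_page(upage);	/* safe for highmem pages */

	memcpy(dst, addr, len);
	kunmap_local(addr);
}

Since map->bounce_page is always such a lowmem kernel allocation after
this patch, the remove path may use it directly, while
vduse_domain_bounce() still uses kmap_local_page() because the page it
touches may be a user-supplied one.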

Thanks

>
> --
> Cheers,
>
> David / dhildenb
>





