On Thu, Jul 25, 2024 at 12:27 AM Michal Hocko <mhocko@xxxxxxxx> wrote:
>
> On Wed 24-07-24 20:55:40, Barry Song wrote:
> > From: Barry Song <v-songbaohua@xxxxxxxx>
> >
> > mm doesn't support non-blockable __GFP_NOFAIL allocation. Because
> > __GFP_NOFAIL without direct reclamation may just result in a busy
> > loop within non-sleepable contexts.
> >
> >  static inline struct page *
> >  __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> >                         struct alloc_context *ac)
> >  {
> >          ...
> >          /*
> >           * Make sure that __GFP_NOFAIL request doesn't leak out and make sure
> >           * we always retry
> >           */
> >          if (gfp_mask & __GFP_NOFAIL) {
> >                  /*
> >                   * All existing users of the __GFP_NOFAIL are blockable, so warn
> >                   * of any new users that actually require GFP_NOWAIT
> >                   */
> >                  if (WARN_ON_ONCE_GFP(!can_direct_reclaim, gfp_mask))
> >                          goto fail;
> >                  ...
> >          }
> >          ...
> >  fail:
> >          warn_alloc(gfp_mask, ac->nodemask,
> >                          "page allocation failure: order:%u", order);
> >  got_pg:
> >          return page;
> >  }
> >
> > Let's move the memory allocation out of the atomic context and use
> > the normal sleepable context to get pages.
> >
> > [RFC]: This has only been compile-tested; I'd prefer if the VDPA maintainers
> > handles it.
> >
> > Cc: "Michael S. Tsirkin" <mst@xxxxxxxxxx>
> > Cc: Jason Wang <jasowang@xxxxxxxxxx>
> > Cc: Xuan Zhuo <xuanzhuo@xxxxxxxxxxxxxxxxx>
> > Cc: "Eugenio Pérez" <eperezma@xxxxxxxxxx>
> > Cc: Maxime Coquelin <maxime.coquelin@xxxxxxxxxx>
> > Signed-off-by: Barry Song <v-songbaohua@xxxxxxxx>
> > ---
> >  drivers/vdpa/vdpa_user/iova_domain.c | 24 ++++++++++++++++++++----
> >  1 file changed, 20 insertions(+), 4 deletions(-)
> >
> > diff --git a/drivers/vdpa/vdpa_user/iova_domain.c b/drivers/vdpa/vdpa_user/iova_domain.c
> > index 791d38d6284c..eff700e5f7a2 100644
> > --- a/drivers/vdpa/vdpa_user/iova_domain.c
> > +++ b/drivers/vdpa/vdpa_user/iova_domain.c
> > @@ -287,28 +287,44 @@ void vduse_domain_remove_user_bounce_pages(struct vduse_iova_domain *domain)
> >  {
> >          struct vduse_bounce_map *map;
> >          unsigned long i, count;
> > +        struct page **pages = NULL;
> >
> >          write_lock(&domain->bounce_lock);
> >          if (!domain->user_bounce_pages)
> >                  goto out;
> > -
> >          count = domain->bounce_size >> PAGE_SHIFT;
> > +        write_unlock(&domain->bounce_lock);
> > +
> > +        pages = kmalloc_array(count, sizeof(*pages), GFP_KERNEL | __GFP_NOFAIL);
> > +        for (i = 0; i < count; i++)
> > +                pages[i] = alloc_page(GFP_KERNEL | __GFP_NOFAIL);
>
> AFAICS vduse_domain_release calls this function with
> spin_lock(&domain->iotlb_lock) so dropping &domain->bounce_lock is not
> sufficient.

Yes, this is true:

static int vduse_domain_release(struct inode *inode, struct file *file)
{
        struct vduse_iova_domain *domain = file->private_data;

        spin_lock(&domain->iotlb_lock);
        vduse_iotlb_del_range(domain, 0, ULLONG_MAX);
        vduse_domain_remove_user_bounce_pages(domain);
        vduse_domain_free_kernel_bounce_pages(domain);
        spin_unlock(&domain->iotlb_lock);
        put_iova_domain(&domain->stream_iovad);
        put_iova_domain(&domain->consistent_iovad);
        vhost_iotlb_free(domain->iotlb);
        vfree(domain->bounce_maps);
        kfree(domain);

        return 0;
}

This is quite a pain. I admit I don't have deep knowledge of this driver,
and I don't think it is safe to drop both locks and then reacquire them.
The situation is rather complex, so I would prefer that the VDPA
maintainers take the lead in implementing a proper fix. A rough sketch of
the direction I had in mind is in the P.S. below.

>
> --
> Michal Hocko
> SUSE Labs

Thanks
Barry
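
P.S. A minimal, untested sketch only: it assumes the __GFP_NOFAIL
allocations can simply move into vduse_domain_release() before
iotlb_lock is taken, and that vduse_domain_remove_user_bounce_pages()
could grow parameters to consume the pre-allocated pages. The new
signature is my assumption, not something I have verified against the
driver:

static int vduse_domain_release(struct inode *inode, struct file *file)
{
        struct vduse_iova_domain *domain = file->private_data;
        unsigned long count = domain->bounce_size >> PAGE_SHIFT;
        struct page **pages;
        unsigned long i;

        /*
         * Sleepable context, no spinlock held yet: do the __GFP_NOFAIL
         * allocations here.  They may turn out to be unneeded if user
         * bounce pages were never registered, in which case the callee
         * would have to free them again.
         */
        pages = kmalloc_array(count, sizeof(*pages), GFP_KERNEL | __GFP_NOFAIL);
        for (i = 0; i < count; i++)
                pages[i] = alloc_page(GFP_KERNEL | __GFP_NOFAIL);

        spin_lock(&domain->iotlb_lock);
        vduse_iotlb_del_range(domain, 0, ULLONG_MAX);
        /* hypothetical new signature: take pages instead of allocating */
        vduse_domain_remove_user_bounce_pages(domain, pages, count);
        vduse_domain_free_kernel_bounce_pages(domain);
        spin_unlock(&domain->iotlb_lock);

        put_iova_domain(&domain->stream_iovad);
        put_iova_domain(&domain->consistent_iovad);
        vhost_iotlb_free(domain->iotlb);
        vfree(domain->bounce_maps);
        kfree(domain);

        return 0;
}

Allocating count pages unconditionally (and freeing them again when the
user bounce pages are already gone) is obviously wasteful; whether that
trade-off is acceptable here is exactly the kind of call I'd rather
leave to the maintainers.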