Re: Re: Re: [PATCH] vduse: avoid using __GFP_NOFAIL

On Mon, Aug 12, 2024 at 3:00 PM Jason Wang <jasowang@xxxxxxxxxx> wrote:
>
> On Thu, Aug 8, 2024 at 6:52 PM Yongji Xie <xieyongji@xxxxxxxxxxxxx> wrote:
> >
> > On Thu, Aug 8, 2024 at 10:58 AM Jason Wang <jasowang@xxxxxxxxxx> wrote:
> > >
> > > On Wed, Aug 7, 2024 at 2:52 PM Yongji Xie <xieyongji@xxxxxxxxxxxxx> wrote:
> > > >
> > > > On Mon, Aug 5, 2024 at 4:21 PM Jason Wang <jasowang@xxxxxxxxxx> wrote:
> > > > >
> > > > > Barry said [1]:
> > > > >
> > > > > """
> > > > > mm doesn't support non-blockable __GFP_NOFAIL allocation, because
> > > > > __GFP_NOFAIL without direct reclamation may just result in a busy
> > > > > loop within non-sleepable contexts.
> > > > > ""“
> > > > >
> > > > > Unfortunately, we do that under a read lock. A possible way to fix this
> > > > > is to move the page allocation out of the lock into the caller, but
> > > > > having to allocate a huge number of pages and an auxiliary page array
> > > > > seems to be problematic as well, per Tetsuo [2]:
> > > > >
> > > > > """
> > > > > You should implement proper error handling instead of using
> > > > > __GFP_NOFAIL if count can become large.
> > > > > """
> > > > >
> > > > > So I choose another way, which does not release the kernel bounce pages
> > > > > when the user tries to register userspace bounce pages. Then we don't
> > > > > need to do allocation in a path which is not expected to fail (e.g. in
> > > > > the release path). We pay for this with more memory usage, but further
> > > > > optimizations could be done on top.
> > > > >
> > > > > [1] https://lore.kernel.org/all/CACGkMEtcOJAA96SF9B8m-nZ1X04-XZr+nq8ZQ2saLnUdfOGOLg@xxxxxxxxxxxxxx/T/#m3caef86a66ea6318ef94f9976ddb3a0ccfe6fcf8
> > > > > [2] https://lore.kernel.org/all/CACGkMEtcOJAA96SF9B8m-nZ1X04-XZr+nq8ZQ2saLnUdfOGOLg@xxxxxxxxxxxxxx/T/#m7ad10eaba48ade5abf2d572f24e185d9fb146480
> > > > >
> > > > > Fixes: 6c77ed22880d ("vduse: Support using userspace pages as bounce buffer")
> > > > > Signed-off-by: Jason Wang <jasowang@xxxxxxxxxx>
> > > > > ---
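
A minimal sketch of the idea for anyone following along, using hypothetical
simplified structures rather than the actual vduse code: the kernel bounce
pages stay allocated while userspace pages are registered, so the
deregistration path, which runs under the read lock, never has to allocate.

    #include <stddef.h>

    struct page;  /* opaque here; the real one is in <linux/mm_types.h> */

    /* Hypothetical per-IOVA bookkeeping, not the real vduse layout. */
    struct bounce_map {
            struct page *kernel_page; /* now kept allocated at all times */
            struct page *user_page;   /* pinned userspace page, or NULL */
    };

    /* Registering a userspace page only pins it; the kernel bounce
     * page is intentionally no longer freed here. */
    static void reg_user_page(struct bounce_map *m, struct page *upage)
    {
            m->user_page = upage;
    }

    /* Deregistration runs under a read lock: since kernel_page was
     * never released, no allocation (and no __GFP_NOFAIL) is needed. */
    static void dereg_user_page(struct bounce_map *m)
    {
            m->user_page = NULL; /* transparently fall back to kernel_page */
    }
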
> > > >
> > > > Reviewed-by: Xie Yongji <xieyongji@xxxxxxxxxxxxx>
> > > > Tested-by: Xie Yongji <xieyongji@xxxxxxxxxxxxx>
> > >
> > > Thanks.
> > >
> > > >
> > > > Have tested it with qemu-storage-daemon [1]:
> > > >
> > > > $ qemu-storage-daemon \
> > > >     --chardev socket,id=charmonitor,path=/tmp/qmp.sock,server=on,wait=off \
> > > >     --monitor chardev=charmonitor \
> > > >     --blockdev driver=host_device,cache.direct=on,aio=native,filename=/dev/nullb0,node-name=disk0 \
> > > >     --export type=vduse-blk,id=vduse-test,name=vduse-test,node-name=disk0,writable=on
> > > >
> > > > [1] https://github.com/bytedance/qemu/tree/vduse-umem
> > >
> > > Great, would you like to post them to QEMU?
> > >
> >
> > It looks like qemu-storage-daemon would not benefit from this feature,
> > which is designed for hugepage users such as SPDK/DPDK.
>
> Yes, but maybe for testing purposes like here?
>

OK for me.
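
For anyone who wants to exercise the SPDK/DPDK-style path by hand, here is a
rough sketch of registering hugepage-backed memory as the userspace bounce
buffer through the VDUSE_IOTLB_REG_UMEM ioctl. The field names follow my
reading of include/uapi/linux/vduse.h; treat this as an untested starting
point rather than reference code:

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <linux/vduse.h>

    /* Map anonymous hugepage memory and register it as the bounce
     * buffer of an already-open VDUSE device fd. iova/size must
     * cover the device's bounce IOVA range. */
    static int register_umem(int dev_fd, uint64_t iova, uint64_t size)
    {
            void *buf = mmap(NULL, size, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB,
                             -1, 0);
            if (buf == MAP_FAILED)
                    return -1;

            struct vduse_iova_umem umem = {
                    .uaddr = (uint64_t)(uintptr_t)buf,
                    .iova  = iova,
                    .size  = size,
            };
            return ioctl(dev_fd, VDUSE_IOTLB_REG_UMEM, &umem);
    }
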

Thanks,
Yongji
