Re: [PATCH] vhost/vsock: Use kvmalloc/kvfree for larger packets.

On Wed, Sep 28, 2022 at 04:02:12PM -0400, Michael S. Tsirkin wrote:
> On Wed, Sep 28, 2022 at 05:11:35PM +0200, Stefano Garzarella wrote:
> > On Wed, Sep 28, 2022 at 05:31:58AM -0400, Michael S. Tsirkin wrote:
> > > On Wed, Sep 28, 2022 at 10:28:23AM +0200, Stefano Garzarella wrote:
> > > > On Wed, Sep 28, 2022 at 03:45:38PM +0900, Junichi Uekawa wrote:
> > > > > When copying a large file over sftp over vsock, the data size is usually 32kB,
> > > > > and kmalloc seems to fail when trying to allocate 32 32kB regions.
> > > > >
> > > > > Call Trace:
> > > > >  [<ffffffffb6a0df64>] dump_stack+0x97/0xdb
> > > > >  [<ffffffffb68d6aed>] warn_alloc_failed+0x10f/0x138
> > > > >  [<ffffffffb68d868a>] ? __alloc_pages_direct_compact+0x38/0xc8
> > > > >  [<ffffffffb664619f>] __alloc_pages_nodemask+0x84c/0x90d
> > > > >  [<ffffffffb6646e56>] alloc_kmem_pages+0x17/0x19
> > > > >  [<ffffffffb6653a26>] kmalloc_order_trace+0x2b/0xdb
> > > > >  [<ffffffffb66682f3>] __kmalloc+0x177/0x1f7
> > > > >  [<ffffffffb66e0d94>] ? copy_from_iter+0x8d/0x31d
> > > > >  [<ffffffffc0689ab7>] vhost_vsock_handle_tx_kick+0x1fa/0x301 [vhost_vsock]
> > > > >  [<ffffffffc06828d9>] vhost_worker+0xf7/0x157 [vhost]
> > > > >  [<ffffffffb683ddce>] kthread+0xfd/0x105
> > > > >  [<ffffffffc06827e2>] ? vhost_dev_set_owner+0x22e/0x22e [vhost]
> > > > >  [<ffffffffb683dcd1>] ? flush_kthread_worker+0xf3/0xf3
> > > > >  [<ffffffffb6eb332e>] ret_from_fork+0x4e/0x80
> > > > >  [<ffffffffb683dcd1>] ? flush_kthread_worker+0xf3/0xf3
> > > > >
> > > > > Work around this by using kvmalloc instead.
> > > > >
> > > > > Signed-off-by: Junichi Uekawa <uekawa@xxxxxxxxxxxx>
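For context, the patch title and description imply a change along these lines: allocate the packet payload with kvmalloc() in vhost_vsock_alloc_pkt() (drivers/vhost/vsock.c) and free it with kvfree() in virtio_transport_free_pkt() (net/vmw_vsock/virtio_transport_common.c). This is a sketch of the idea, not the exact hunks from the patch:

	/* Sketch: kvmalloc() falls back to vmalloc() when a physically
	 * contiguous allocation (order-3 for a 32kB payload) cannot be
	 * satisfied.
	 */
	pkt->buf = kvmalloc(pkt->len, GFP_KERNEL);
	if (!pkt->buf) {
		kfree(pkt);
		return NULL;
	}

	/* The payload may now be vmalloc()-backed, so it must be released
	 * with kvfree(), which handles both kmalloc()- and vmalloc()-backed
	 * buffers.
	 */
	void virtio_transport_free_pkt(struct virtio_vsock_pkt *pkt)
	{
		kvfree(pkt->buf);
		kfree(pkt);
	}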
> > >
> > > My worry here is that this is more of a workaround.
> > > It would be better to not allocate memory so aggressively:
> > > if we are so short on memory we should probably process
> > > packets one at a time. Is that very hard to implement?

> > Currently the "virtio_vsock_pkt" is allocated in the "handle_kick" callback
> > of the TX virtqueue. Then the packet is multiplexed onto the right socket
> > queue, and userspace can dequeue it whenever it wants.
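Concretely, that path looks roughly like this in drivers/vhost/vsock.c (heavily abridged, from memory; the real loop has more error handling and a weight limit):

	static void vhost_vsock_handle_tx_kick(struct vhost_work *work)
	{
		...
		/* One virtio_vsock_pkt is allocated per descriptor chain and
		 * the payload is copied out of the guest buffer here.
		 */
		pkt = vhost_vsock_alloc_pkt(vq, out, in);
		...
		/* The packet is then handed to the common code, which queues
		 * it on the destination socket until userspace reads it.
		 */
		virtio_transport_recv_pkt(&vhost_transport, pkt);
		...
	}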

> > So maybe we can stop processing the virtqueue if we are short on memory, but
> > when can we restart the TX virtqueue processing?

> Assuming you added at least one buffer, the time to restart would be
> after that buffer has been used.
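One possible shape for that, purely illustrative and not from the patch, borrowing the pattern vhost-net uses when it cannot make progress: on allocation failure, give the descriptor back and stop draining the TX virtqueue, and only resume once a buffer has been used, as suggested above:

	pkt = vhost_vsock_alloc_pkt(vq, out, in);
	if (!pkt) {
		/* Out of memory: return the descriptor chain to the
		 * virtqueue and stop processing for now, instead of
		 * warning and dropping the request.
		 */
		vhost_discard_vq_desc(vq, 1);
		break;
	}

	/* The open question is the wake-up: something (e.g. the path that
	 * frees a previously allocated packet) would need to call
	 * vhost_poll_queue(&vq->poll) so the worker resumes processing.
	 */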

Yes, but we still might not have enough contiguous pages for the
allocation, so I would use kvmalloc anyway.

I agree that we should do better; I hope that moving to sk_buff will allow
us to manage allocations better. Maybe after we merge that part we should
spend some time solving these problems.

Thanks,
Stefano

_______________________________________________
Virtualization mailing list
Virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linuxfoundation.org/mailman/listinfo/virtualization


