Re: [PATCH v5 2/2] virtio-net: use mtu size as buffer length for big packets

On Wed, Sep 07, 2022 at 02:33:02PM +0000, Parav Pandit wrote:
> 
> > From: Michael S. Tsirkin <mst@xxxxxxxxxx>
> > Sent: Wednesday, September 7, 2022 10:30 AM
> 
> [..]
> > > > actually how does this waste space? Is this because your device does
> > > > not have INDIRECT?
> > > VQ is 256 entries deep.
> > > Driver posted total of 256 descriptors.
> > > Each descriptor points to a page of 4K.
> > > These descriptors are chained as 4K * 16.
> > 
> > So without indirect then? With indirect, each descriptor can point to 16
> > entries.
> > 
> With indirect, can it post 256 * 16 descriptors' worth of buffers even though the vq depth is 256 entries?
> I recall that the total number of descriptors, counting both direct and indirect, is limited to the vq depth.


> Was there some recent clarification in the spec on this?


This would make INDIRECT completely pointless.  So I don't think we ever
had such a limitation.
The only thing that comes to mind is this:

	A driver MUST NOT create a descriptor chain longer than the Queue Size of
	the device.

but this limits the length of an individual chain, not the total length
of all chains combined.
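
To illustrate, here's a minimal userspace sketch (not kernel code: the
struct only mirrors the spec's virtq_desc field layout, the flag values
match the spec, and all names and addresses are local placeholders for
the example).  It shows a 256-deep split ring holding 256 big-packet
buffers of 16 * 4K = 64K each: every 16-descriptor chain lives in an
indirect table and costs exactly one ring slot, so the rule above is
satisfied per chain (16 <= 256) while the total descriptor count across
all chains is 256 * 16, far beyond the queue size.

/*
 * Illustrative sketch only, not kernel code: one 64K "big packet"
 * buffer occupying a single ring slot via an indirect table on a
 * split ring.  The struct mirrors the spec's virtq_desc field layout;
 * buffer addresses are left as placeholders.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define DESC_F_NEXT     1   /* buffer continues via the next field */
#define DESC_F_WRITE    2   /* device-writable buffer              */
#define DESC_F_INDIRECT 4   /* addr points to a descriptor table   */

struct desc {
	uint64_t addr;
	uint32_t len;
	uint16_t flags;
	uint16_t next;
};

#define QUEUE_SIZE    256
#define PAGES_PER_BUF 16        /* 16 * 4K = 64K per big packet */
#define PAGE_SIZE_4K  4096u

int main(void)
{
	static struct desc ring[QUEUE_SIZE];
	unsigned int slot, i;

	for (slot = 0; slot < QUEUE_SIZE; slot++) {
		/* Each 16-descriptor chain lives in its own indirect table. */
		struct desc *tbl = calloc(PAGES_PER_BUF, sizeof(*tbl));

		for (i = 0; i < PAGES_PER_BUF; i++) {
			tbl[i].addr  = 0;   /* placeholder for page DMA address */
			tbl[i].len   = PAGE_SIZE_4K;
			tbl[i].flags = DESC_F_WRITE |
				       (i + 1 < PAGES_PER_BUF ? DESC_F_NEXT : 0);
			tbl[i].next  = i + 1;
		}

		/* The whole 64K chain costs exactly one ring entry. */
		ring[slot].addr  = (uint64_t)(uintptr_t)tbl;
		ring[slot].len   = PAGES_PER_BUF * sizeof(*tbl);
		ring[slot].flags = DESC_F_INDIRECT;
	}

	/* 256 chains of 16 descriptors each are outstanding, yet no
	 * single chain exceeds the queue size of 256. */
	printf("outstanding payload: %u KB\n",
	       QUEUE_SIZE * PAGES_PER_BUF * PAGE_SIZE_4K / 1024);

	for (slot = 0; slot < QUEUE_SIZE; slot++)
		free((void *)(uintptr_t)ring[slot].addr);
	return 0;
}

Without INDIRECT the same 64K chains would each consume 16 ring
entries, so only 256 / 16 = 16 such buffers could be outstanding at a
time, which is the situation described in the quoted mail above.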

We have an open bug noting that this requirement was omitted from the
packed ring documentation.

-- 
MST

_______________________________________________
Virtualization mailing list
Virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linuxfoundation.org/mailman/listinfo/virtualization


