Re: [RFC 7/11] virtio_pci: new, capability-aware driver.

On Wed, Jan 11, 2012 at 10:55:52AM +1030, Rusty Russell wrote:
> On Tue, 10 Jan 2012 19:03:36 +0200, "Michael S. Tsirkin" <mst@xxxxxxxxxx> wrote:
> > On Wed, Dec 21, 2011 at 11:03:25AM +1030, Rusty Russell wrote:
> > > Yes.  The idea that we can alter fields in the device-specific config
> > > area is flawed.  There may be cases where it doesn't matter, but as an
> > > idea it was holed to begin with.
> > > 
> > > We can reduce probability by doing a double read to check, but there are
> > > still cases where it will fail.
> > 
> > Okay - want me to propose an interface for that?
> 
> Had a brief chat with BenH (CC'd).
> 
> I think we should deprecate writing to the config space.  Only balloon
> does it AFAICT, and I can't quite figure out *why* it has an 'active'
> field.

Are you sure? I think net writes a mac address.
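
Roughly what drivers/net/virtio_net.c does on set_mac_address today,
paraphrased from memory (helper name mine, just for illustration):

	#include <linux/virtio_config.h>
	#include <linux/virtio_net.h>

	/* Push the new MAC into the device config space, byte by byte
	 * via the transport's config accessor. */
	static void push_mac(struct virtio_device *vdev, const u8 *mac)
	{
		vdev->config->set(vdev, offsetof(struct virtio_net_config, mac),
				  mac, ETH_ALEN);
	}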

> This solves half the problem, of sync guest writes.  For the
> other half, I suggest a generation counter; odd means inconsistent.  The
> guest can poll.

So we read the counter until it's even, read the config, then re-read
the counter and repeat if it changed? Yes, that works. However, I would
like a way to detect a config change just by looking at memory; at the
moment we need to read the ISR to know.  If we used a VQ for this, the
advantage would be that the device could work with a single MSI-X
vector shared by all VQs.
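
To spell out the polling, a rough sketch, assuming a hypothetical
generation byte in the config space (the offsets and names below are
made up, not an existing interface):

	#include <linux/virtio_config.h>

	/* Sketch only: CFG_GENERATION and CFG_FIELDS are hypothetical
	 * offsets.  An odd generation means the device is mid-update;
	 * a generation that changed across the read means we raced and
	 * must retry. */
	static void read_config_stable(struct virtio_device *vdev,
				       void *buf, unsigned len)
	{
		u8 before, after;

		do {
			do {
				vdev->config->get(vdev, CFG_GENERATION,
						  &before, sizeof(before));
			} while (before & 1);

			vdev->config->get(vdev, CFG_FIELDS, buf, len);

			vdev->config->get(vdev, CFG_GENERATION,
					  &after, sizeof(after));
		} while (after != before);
	}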

If we do require a config VQ anyway, why not use it to notify the guest
of config changes? The guest could pre-post an in-buffer and the host
would use that.
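
Roughly what I have in mind, purely hypothetical (no such VQ or struct
exists today; the layout is made up):

	#include <linux/virtio.h>
	#include <linux/scatterlist.h>

	/* Guest pre-posts a device-writable buffer on a config VQ; the
	 * device fills it in and kicks the VQ on a config change, so the
	 * guest sees the change without reading the ISR. */
	struct config_event {
		__le32 changed;		/* which field(s) changed */
		u8 config[64];		/* snapshot of the new config */
	};

	static int post_config_buf(struct virtqueue *cfg_vq,
				   struct config_event *ev)
	{
		struct scatterlist sg;

		sg_init_one(&sg, ev, sizeof(*ev));
		/* device writes into this buffer, hence an in-buffer */
		return virtqueue_add_inbuf(cfg_vq, &sg, 1, ev, GFP_KERNEL);
	}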


> BenH also convinced me we should finally make the config space LE if
> we're going to change things.  Since PCI is the most common transport,
> guest-endian confuses people.  And it sucks for really weird machines.

Are we going to keep guest endian for e.g. the virtio net header?
If yes, the benefit of switching the config space is not that big.
And changes to the devices themselves would affect non-PCI transports.
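
To illustrate the mismatch: with an LE config space a driver converts
explicitly on config reads, while data on the ring such as the net
header stays guest-endian (sketch only; the helper and offset are
made up):

	#include <linux/virtio_config.h>

	/* Read a 32-bit field from a hypothetical LE config layout. */
	static u32 get_config_le32(struct virtio_device *vdev, unsigned offset)
	{
		__le32 raw;

		vdev->config->get(vdev, offset, &raw, sizeof(raw));
		return le32_to_cpu(raw);	/* explicit LE conversion */
	}

	/* ...whereas a virtio_net_hdr taken off the ring would still be
	 * guest-endian, so no conversion there -- hence the mixed rules. */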

> We should also change the ring (to a single ring, I think).  Descriptors
> to 24 bytes long (8 byte cookie, 8 byte addr, 4 byte len, 4 byte flags).
> We might be able to squeeze it into 20 bytes but that means packing.  We
> should support inline, chained or indirect.  Let the other side ack by
> setting flag, cookie and len (if written).
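
Just so we're looking at the same thing, I read that as roughly the
following (field names mine, LE assumed per the endian discussion
above; only the sizes come from your mail):

	/* Proposed 24-byte descriptor: 8-byte cookie, 8-byte addr,
	 * 4-byte len, 4-byte flags. */
	struct vring2_desc {
		__le64 cookie;	/* opaque tag, echoed back in the ack */
		__le64 addr;	/* buffer address */
		__le32 len;	/* buffer length, written back on ack */
		__le32 flags;	/* inline/chained/indirect, ack bit */
	};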

Quite possibly some or all of these things help performance, but do we
have to change the spec before we have experimental proof?

I did experiment with a single ring using tools/virtio and I didn't see
a measurable performance gain. Two rings do have the advantage of not
requiring a host-side copy, and that copy would surely add to cache
pressure.  Since the host doesn't change descriptors, we could also
preformat some descriptors in the current design (see the sketch
below).  There is a fragmentation problem in theory, but it can be
alleviated with a smart allocator. As for inline, it can only help
very small buffers.  Which workloads do you have in mind exactly?
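
For instance, a sketch of preformatting against the existing struct
vring_desc (a driver-internal helper, not existing code): since the
host never writes the descriptor table, addr and the chaining can be
filled in once, and only len (and maybe flags) touched per use:

	#include <linux/virtio_ring.h>

	/* Preformat a chain of descriptors for a fixed set of buffers. */
	static void preformat_chain(struct vring_desc *desc,
				    const u64 *addrs, unsigned int n)
	{
		unsigned int i;

		for (i = 0; i < n; i++) {
			desc[i].addr = addrs[i];	/* fixed buffer address */
			desc[i].len = 0;		/* filled in per use */
			desc[i].flags = VRING_DESC_F_NEXT;
			desc[i].next = i + 1;		/* fixed chaining */
		}
		desc[n - 1].flags = 0;			/* end of chain */
		desc[n - 1].next = 0;
	}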


> Moreover, I think we should make all these changes at once (at least, in
> the spec).  That makes it a big change, and it'll take longer to
> develop, but makes it easy in the long run to differentiate legacy and
> modern virtio.
> 
> Thoughts?
> Rusty.

BTW this seems to be the reverse of what you said in Mar 2011,
see 87mxkjls61.fsf@xxxxxxxxxxxxxxx :)

I am much less concerned with what we do for configuration,
but I do not believe we have learned all the performance lessons
from the current virtio ring. Is there any reason why we couldn't
experiment with inline descriptors within the existing ring and see
whether that gets us anything?
If we make a bunch of changes to the ring at once, we can't
figure out what's right and what's wrong, or back out of
mistakes later.

Since there are non-PCI transports that use the ring,
we really shouldn't make both the configuration changes and
the ring changes depend on the same feature bit.

-- 
MST

