On Sun, Mar 24, 2013 at 04:27:15PM -0700, H. Peter Anvin wrote:
> On 03/24/2013 01:19 PM, Michael S. Tsirkin wrote:
> >> struct virtio_pci_cap {
> >> 	u8 cap_vndr;	/* Generic PCI field: PCI_CAP_ID_VNDR */
> >> @@ -150,7 +153,9 @@ struct virtio_pci_common_cfg {
> >> 	__le16 queue_size;	/* read-write, power of 2. */
> >> 	__le16 queue_msix_vector;/* read-write */
> >> 	__le16 queue_enable;	/* read-write */
> >> -	__le16 queue_notify;	/* read-only */
> >> +	__le16 unused2;
> >> +	__le32 queue_notify_val;/* read-only */
> >> +	__le32 queue_notify_off;/* read-only */
> >> 	__le64 queue_desc;	/* read-write */
> >> 	__le64 queue_avail;	/* read-write */
> >> 	__le64 queue_used;	/* read-write */
> >
> > So how exactly do the offsets mesh with the dual capability? For IO we
> > want to use the same address and get queue from the data, for memory we
> > want a per queue address ...
>
> How about having a readonly field which is "address increment per trigger"?
>
> The guest would be required to always write the queue number as the
> data, however, the host would not be required to interpret it if the
> address increment is nonzero?
>
> 	-hpa

Not sure what increment means here.
The interface that Rusty proposes reports a queue offset for each queue.
My question was that we probably can't afford offsets for IO since the
address space is so restricted.
Maybe rename to memory_offset/memory_val and have it only apply to memory?

-- 
MST
_______________________________________________
Virtualization mailing list
Virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linuxfoundation.org/mailman/listinfo/virtualization