Ohad Ben-Cohen <ohad@xxxxxxxxxx> writes:
> On Tue, Apr 23, 2013 at 6:46 AM, Rusty Russell <rusty@xxxxxxxxxxxxxxx> wrote:
>> Oh, we can break everything :)
>>
>> I was concentrating purely on the mechanics of the virtqueue, mainly
>> because vhost has special needs wrt tracking changes.  vhost doesn't
>> use vringh yet because my patches are slightly suboptimal (I stick
>> with the vhost API, just replace the guts with vringh).  Michael has
>> a simplification of vhost-net pending, which will make altering this
>> much easier.
>>
>> But CAIF isn't the right thing to optimize for, either.  It's weird
>> to have both host and guest rings at the same time, and I don't see
>> other users doing that (ie. vhost_net and tcm_vhost).  But if we can
>> make it easier for you without overly uglifying vringh, that'd be
>> great.
>
> Thanks.
>
> Today, with one application processor talking to one or several remote
> cores, we live well with guest rings, but future SoCs seem to be
> getting an increasing number of on-chip cores which all talk to each
> other.  Managing this matrix of communications with guest rings is
> somewhat cumbersome - it requires deciding, for every two cores, who
> is "the guest" and who is "the host".  As the number of edges in this
> graph grows, this becomes increasingly hard to develop, set up and
> debug.
>
> In such environments it makes sense to have, for each pair of on-chip
> cores, 1 guest and 1 host ring.  This way each core maintains its own
> TX buffers and sends a buffer across whenever it has a pending
> outbound message.  This also works well with systems where each core
> has its own memory, which only it can write to and from which others
> can only read.
>
> So I expect additional users for this paradigm CAIF has adopted -
> probably rpmsg at the very least - which makes it even more appealing
> to clean up nicely.
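[For readers unfamiliar with the layout Ohad describes, here is a minimal
user-space sketch of the idea: per pair of cores, one ring per direction,
and a core only ever writes into the ring that lives in its own memory.
All names (`tx_ring`, `core_send`, etc.) are invented for illustration;
this is not the vringh or virtqueue API.]

```c
#include <stddef.h>

#define RING_SIZE 4

/* A TX ring lives in the sending core's own memory; only that core
 * ever writes to it.  The peer only reads. */
struct tx_ring {
	const char *msgs[RING_SIZE];
	unsigned head;		/* advanced by the owning core only */
};

struct core {
	struct tx_ring tx;	/* in this core's own memory */
	struct tx_ring *peer_tx;/* the peer's ring: read-only for us */
	unsigned tail;		/* our read position in the peer's ring */
};

/* Sender side: write only to our own ring. */
static void core_send(struct core *c, const char *msg)
{
	c->tx.msgs[c->tx.head % RING_SIZE] = msg;
	c->tx.head++;
}

/* Receiver side: consume from the peer's ring without writing to it. */
static const char *core_recv(struct core *c)
{
	if (c->tail == c->peer_tx->head)
		return NULL;	/* nothing pending */
	return c->peer_tx->msgs[c->tail++ % RING_SIZE];
}
```

Two cores are then wired cross-wise (`a.peer_tx = &b.tx` and vice versa),
so neither side ever has to be designated "the host" for the link.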
> Last year I discussed this at least with Loic (STE) and Suman (TI)
> and both companies were actively developing this for their future
> SoCs - I'm cc'ing both in case there are any updates.

Perhaps we should add a new struct virtio_pair (?), with associated
ops, which works on top of the existing stuff.

That may be too many levels of abstraction, but it feels right.

I don't mind Sjur's current simple code; there's nothing magical about
upstream APIs, and I'm happy to merge it now and add an abstraction
later.  It's a pretty common practice, but it's your call.

Cheers,
Rusty.
_______________________________________________
Virtualization mailing list
Virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
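[Editor's aside: Rusty's "struct virtio_pair with associated ops" is only
a suggestion; no such structure exists upstream.  A minimal sketch of
what it might look like is below, using a dummy one-slot mailbox in
place of the real guest/host rings.  Every name here - `virtio_pair`,
`virtio_pair_ops`, `mbox_*` - is hypothetical.]

```c
#include <stddef.h>
#include <string.h>

struct virtio_pair;

/* Hypothetical ops table: one xmit/recv interface regardless of which
 * underlying ring (guest-side virtqueue or host-side vringh) backs
 * each direction. */
struct virtio_pair_ops {
	int (*xmit)(struct virtio_pair *vp, const void *buf, size_t len);
	int (*recv)(struct virtio_pair *vp, void *buf, size_t len);
};

struct virtio_pair {
	void *tx_priv;	/* would wrap our TX ring (e.g. a virtqueue) */
	void *rx_priv;	/* would wrap our RX ring (e.g. a vringh) */
	const struct virtio_pair_ops *ops;
};

/* Dummy backend: a one-slot mailbox standing in for the real rings. */
struct mailbox {
	char data[64];
	size_t len;
};

static int mbox_xmit(struct virtio_pair *vp, const void *buf, size_t len)
{
	struct mailbox *m = vp->tx_priv;

	if (len > sizeof(m->data))
		return -1;
	memcpy(m->data, buf, len);
	m->len = len;
	return 0;
}

static int mbox_recv(struct virtio_pair *vp, void *buf, size_t len)
{
	struct mailbox *m = vp->rx_priv;

	if (m->len == 0 || len < m->len)
		return -1;
	memcpy(buf, m->data, m->len);
	return (int)m->len;
}

static const struct virtio_pair_ops mbox_ops = { mbox_xmit, mbox_recv };
```

Two pairs wired cross-wise (A's TX mailbox is B's RX mailbox and vice
versa) then give each core a symmetric send/receive interface, which is
the point of the abstraction: callers never see the guest/host asymmetry.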