On 03/10/2010 07:13 PM, Anthony Liguori wrote:
> On 03/10/2010 03:25 AM, Avi Kivity wrote:
>> On 03/09/2010 11:44 PM, Anthony Liguori wrote:
>>>> Ah yes. For cross tcg environments you can map the memory using
>>>> mmio callbacks instead of directly, and issue the appropriate
>>>> barriers there.
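
Concretely, that trapping path might look something like this (a minimal
sketch only; the callback names and signatures are invented, not QEMU's
actual io-memory API; the point is just that every guest access to the
shared region goes through a host function that can fence around it):

#include <stdint.h>

static uint8_t *shmem_base;   /* host mapping of the shared object */

static uint32_t shmem_mmio_readl(void *opaque, uint64_t addr)
{
    uint32_t val;

    __sync_synchronize();     /* full barrier before the load... */
    val = *(volatile uint32_t *)(shmem_base + addr);
    __sync_synchronize();     /* ...and after it */
    return val;
}

static void shmem_mmio_writel(void *opaque, uint64_t addr, uint32_t val)
{
    __sync_synchronize();
    *(volatile uint32_t *)(shmem_base + addr) = val;
    __sync_synchronize();
}
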
>>> Not good enough unless you want to severely restrict the use of
>>> shared memory within the guest.
>>>
>>> For instance, it's going to be useful to assume that your atomic
>>> instructions remain atomic. Crossing architecture boundaries here
>>> makes these assumptions invalid. A barrier is not enough.
>> You could make the mmio callbacks flow to the shared memory server
>> over the unix-domain socket, which would then serialize them. Still
>> need to keep RMWs as single operations. When the host supports it,
>> implement the operation locally (you can't render cmpxchg16b on i386,
>> for example).
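
A sketch of that flow, with an invented wire format (none of these names
come from the real ivshmem protocol): each RMW from a TCG guest becomes
one message to the server, which applies it and replies with the old
value, so the socket's receive order is the serialization point:

#include <stdint.h>

enum shm_op { SHM_OP_CMPXCHG, SHM_OP_XADD };

struct shm_rmw_req {
    uint32_t op;        /* enum shm_op */
    uint32_t size;      /* operand size in bytes (8 assumed below) */
    uint64_t offset;    /* offset into the shared object */
    uint64_t cmp;       /* expected value, for cmpxchg */
    uint64_t val;       /* new value, or addend */
};

/* Server side.  Using the host's atomic builtins keeps the operation
 * atomic even against peers that map the memory directly; when no host
 * instruction exists (the cmpxchg16b-on-i386 case), correctness instead
 * depends on every peer routing the operation through this loop. */
static uint64_t shm_apply_rmw(void *base, const struct shm_rmw_req *req)
{
    uint64_t *p = (uint64_t *)((char *)base + req->offset);

    switch (req->op) {
    case SHM_OP_CMPXCHG:
        return __sync_val_compare_and_swap(p, req->cmp, req->val);
    case SHM_OP_XADD:
        return __sync_fetch_and_add(p, req->val);
    }
    return 0;
}
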
> But now you have a requirement that the shmem server runs in lock-step
> with the guest VCPU, which has to happen for every single word of data
> transferred.

Alternative implementation: expose a futex in a shared memory object and
use that to serialize access. Now all accesses happen from vcpu
context, and as long as there is no contention, it should be fast, at
least relative to tcg.
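
For example (a deliberately simplified two-state futex lock; the names
are mine, and a production version would track waiters instead of
issuing an unconditional wake; see Drepper's "Futexes Are Tricky"):

#include <stdint.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/futex.h>

/* The lock word lives inside the shared memory object itself, so any
 * peer can take it.  The uncontended path is a single atomic op in
 * vcpu context; the futex syscall is only reached under contention. */
static void shm_lock(uint32_t *lock)
{
    while (__sync_lock_test_and_set(lock, 1)) {
        /* contended: sleep until the current holder wakes us */
        syscall(SYS_futex, lock, FUTEX_WAIT, 1, NULL, NULL, 0);
    }
}

static void shm_unlock(uint32_t *lock)
{
    __sync_lock_release(lock);    /* store 0 with release semantics */
    syscall(SYS_futex, lock, FUTEX_WAKE, 1, NULL, NULL, 0);
}

Note the futex is used without FUTEX_PRIVATE_FLAG, so it works across
processes on the shared mapping.
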
> You're much better off using a bulk-data transfer API that relaxes
> coherency requirements. IOW, shared memory doesn't make sense for TCG
> :-)

Rather, tcg doesn't make sense for shared memory smp. But we knew that
already.
--
error compiling committee.c: too many arguments to function