Ingo Molnar wrote:
> btw., while we have everyone on the phone and talking ;) Technologically
> it would save us a whole lot of trouble in Linux if 'external'
> hypervisors could standardize around a single ABI - such as VMI. Is
> there any deep reason why Xen couldnt use VMI to talk to Linux? I
> suspect a range of VMI vectors could be set aside for Xen's dom0 (and
> other) APIs that have no current VMI equivalent - if there's broad
> agreement on the current 60+ base VMI vectors that center around basic
> x86 CPU capabilities - which make up the largest portion of our
> paravirtualization complexity. Pipe dream?

Well, we went around this about six months ago, and decided the best way
forward is the current paravirt_ops approach.

The Xen and VMI interfaces are quite different, and have different design
goals. VMI is fairly low-level, and is approximately a software
implementation of VT/SVM with a couple of extra bits, whereas the Xen
interface is intended to be higher level, with the expectation that the
guest cooperates more with its virtualization. They have similarities by
necessity, but they have some fairly basic differences.

You could come up with some shim layer which makes the two interfaces
appear similar, and you could spell the name of that shim "VMI". Or you
could call it "paravirt_ops", which is the name we chose. And you could
implement the interface to that layer as a binary ABI, or you could make
it a normal source-level Linux kernel interface, which is what we chose
to do. Either way, it's still one of native hardware, VMI/ESX, Xen or
something else under the interface layer, and you need to test for each
case regardless of what the interface looks like.

J
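
P.S. For anyone who hasn't looked at the tree: the "normal source-level
Linux kernel interface" is basically a structure of function pointers
that each backend fills in, with one instance selected at boot. Below is
a minimal, self-contained userspace sketch of that pattern; the names,
fields and "hv" backend are purely illustrative, not the actual struct
paravirt_ops members in the kernel.

/*
 * Illustrative sketch of the ops-struct pattern described above --
 * NOT the real paravirt_ops declaration.  One struct of function
 * pointers, several backends, one chosen at startup.
 */
#include <stdio.h>
#include <string.h>

struct pv_ops {
	const char *name;
	void (*irq_disable)(void);
	void (*irq_enable)(void);
};

/* "native" backend: on real hardware this would just execute cli/sti */
static void native_irq_disable(void) { printf("native: cli\n"); }
static void native_irq_enable(void)  { printf("native: sti\n"); }

/* hypothetical hypervisor backend: would issue a hypercall instead */
static void hv_irq_disable(void) { printf("hypervisor: mask-event hypercall\n"); }
static void hv_irq_enable(void)  { printf("hypervisor: unmask-event hypercall\n"); }

static struct pv_ops native_ops = { "native", native_irq_disable, native_irq_enable };
static struct pv_ops hv_ops     = { "hv",     hv_irq_disable,     hv_irq_enable };

/* filled in once at "boot"; everything else calls through it */
static struct pv_ops *pv_ops = &native_ops;

int main(int argc, char **argv)
{
	/* pretend we detected a hypervisor if asked to */
	if (argc > 1 && strcmp(argv[1], "hv") == 0)
		pv_ops = &hv_ops;

	printf("running with %s backend\n", pv_ops->name);
	pv_ops->irq_disable();
	pv_ops->irq_enable();
	return 0;
}

Run it with no argument for the native path, or with "hv" to see the
other backend taken; the point is that the callers are identical either
way, and the choice of native hardware, VMI/ESX, Xen or something else
is made once, underneath the interface.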