On 05/27/09 19:09, Ed Swierk wrote:
> On Wed, May 27, 2009 at 9:28 AM, Avi Kivity <avi@xxxxxxxxxx> wrote:
>> Will it actually solve the problem?
>> - can all hypercalls that can be issued with
>> pv-on-hvm-on-kvm-with-a-side-order-of-fries be satisfied from userspace?
Yes.
>> - what about connecting the guest driver to xen netback one day? we don't
>> want to go through userspace for that.
You can't without emulating tons of xen stuff in-kernel.
Current situation:
* Guest does xen hypercalls. We can handle that just fine.
* Host userspace (backends) calls libxen*, which is where the actual
xen hypercalls are hidden. We can redirect those library calls via
LD_PRELOAD (standalone xenner) or function pointers (qemuified
xenner) and do something else instead; see the interposer sketch
below.
Trying to use in-kernel xen netback driver adds this problem:
* Host kernel does xen hypercalls. Ouch. We have to emulate them
in-kernel (otherwise using in-kernel netback would be a quite
pointless exercise).
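For illustration: the LD_PRELOAD redirection is plain symbol
interposition. A minimal sketch (the function name is made up here,
it is not an actual libxen* entry point) could look like this:

#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>

/* example_xen_op() stands in for whichever libxen* function ends up
 * issuing the hypercall -- name and signature are invented for the
 * example. */
int example_xen_op(int cmd, void *arg)
{
    /* Look up the real symbol further down the search path, in case
     * we only want to intercept some of the calls. */
    static int (*real_op)(int, void *);
    if (!real_op)
        real_op = (int (*)(int, void *)) dlsym(RTLD_NEXT, "example_xen_op");

    fprintf(stderr, "xenner shim: intercepted op %d\n", cmd);

    /* A xenner-style shim would emulate the call right here in
     * userspace instead of forwarding it to the real library. */
    return real_op ? real_op(cmd, arg) : -1;
}

Build it with "gcc -shared -fPIC -o shim.so shim.c -ldl" and start the
backend with LD_PRELOAD=./shim.so; the shim's symbol then wins over the
library's and the call never reaches Xen.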
> One way or another, the MSR somehow has to map in a chunk of data
> supplied by userspace. Are you suggesting an alternative to the PIO
> hack?
Well, the "chunk of data" is on disk anyway:
$libdir/xenner/hvm{32,64}.bin
So a possible plan of attack could be "ln -s $libdir/xenner
/lib/firmware", let kvm.ko grab it if needed using
request_firmware("xenner/hvm${bits}.bin"), and a few lines of kernel
code handling the wrmsr. Logic is just this:
void xenner_wrmsr(uint64_t val, int longmode)
{
    /* The guest-written MSR value carries the page-aligned guest
     * physical destination in the upper bits and the blob page
     * index in the lower bits. */
    uint32_t page  = val & ~PAGE_MASK;
    uint64_t paddr = val & PAGE_MASK;
    uint8_t *blob  = longmode ? hvm64 : hvm32;

    /* Copy one page of the hypercall blob into guest memory. */
    cpu_physical_memory_write(paddr, blob + page * PAGE_SIZE,
                              PAGE_SIZE);
}
Well, you'll have to sprinkle in blob loading and caching and some error
checking. But even with that it is probably hard to beat in actual code
size. An additional plus is that this way we get away without a new ioctl.
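Just as a sketch (untested, minimal error handling, names invented),
the loading/caching side in kvm.ko could look roughly like this:

#include <linux/device.h>
#include <linux/firmware.h>

static const struct firmware *xenner_hvm32, *xenner_hvm64;

/* Fetch both blobs via the firmware loader and keep them cached
 * until module unload. */
static int xenner_load_blobs(struct device *dev)
{
    int ret;

    if (xenner_hvm32 && xenner_hvm64)
        return 0; /* already cached */

    ret = request_firmware(&xenner_hvm32, "xenner/hvm32.bin", dev);
    if (ret)
        return ret;
    ret = request_firmware(&xenner_hvm64, "xenner/hvm64.bin", dev);
    if (ret) {
        release_firmware(xenner_hvm32);
        xenner_hvm32 = NULL;
        return ret;
    }
    return 0;
}

static void xenner_free_blobs(void)
{
    release_firmware(xenner_hvm32);
    release_firmware(xenner_hvm64);
    xenner_hvm32 = xenner_hvm64 = NULL;
}

The wrmsr handler above would then pick its blob pointer from the
cached hvm32/hvm64 firmware's fw->data depending on longmode, and
check page * PAGE_SIZE against fw->size before copying.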
Comments?
cheers,
Gerd