On 10.04.2009, at 19:08, Mihail Panev wrote:
> Hi Alex,
>
> Alexander Graf wrote:
>>> So question #1: Is this the right thing to start, and if yes, what's
>>> the story behind that name? I ran across some qemu-system-i386 on
>>> Google, but my compile did not produce such a binary.
>> Yes, the binary is called "qemu-system-x86_64". The story behind all
>> this is somewhere in the archive of this ML.
> Yeah, I had already checked that out before I posted, but what I found
> only seems to clarify the difference between qemu-<arch> and
> qemu-system-<arch>. What baffles me most is why the heck it generates
> an x86_64 target on an i686 system?!
> Or has KVM dropped support for 32-bit x86 as a guest platform
> altogether, relying on x86_64's legacy/compatibility mode? That would
> make sense, but on the other hand it could mislead you into thinking
> that you can run a 64-bit guest on it, which wouldn't work when the
> host is actually 32-bit (like in my case). Or am I missing something
> here?
Please don't question the usefulness of the decision. It is called
"qemu-system-x86_64" for both i386 and x86_64 alike.
>> This will probably go away when qemu and kvm-userspace merge some
>> day.
> I actually thought that had already happened, after I read the
> changelog for kvm-84. Now I see that I must have interpreted "merge
> qemu-svn" the other way round from how it was meant :-)
kvm-userspace merges in upstream qemu changes from time to time. Also,
some developers try to push their changes in kvm-userspace to upstream
qemu. The process is not finished yet though, as you still have two
distinct trees.
>> I agree that this is not exactly obvious, but I wanted to guard the
>> code as heavily as possible :-).
> Guard it? Is that still experimental? I thought this was enabled by
> default...
Yes, it's experimental.
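If you still want to play with it, the only switch is the kvm-amd
module parameter you mention below (assuming your build still calls it
"nested"):

    modprobe kvm-amd nested=1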
> Actually it would have been more obvious if there were a manpage
> documenting that. However, my build did not produce any manpages,
> although the configure script said "Manual directory
> /usr/local/share/man".
> Even if so, I think that needing to set the module parameter
> explicitly is enough of a safeguard. Apropos parameters, I'd rather
> name it something more self-explanatory like "nested_virt" or
> "nested_svm" or something. Most people, including me, usually
> associate "nested" with NPT when it comes to virtualization.
I was imagining a world where you would always have the feature
enabled and choose its activation via -cpu. In that world, -cpu would
default to "compatible" - a CPU that can be migrated even across Intel
and AMD platforms. If you chose -cpu barcelona or so, you would get
SVM features in the guest.
For now it's the way you see it though, and will be until I find time
to make the code work perfectly ;-).
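So for the time being you flip it per VM on the command line, while in
the imagined world the CPU model alone would imply it (image name is
just a placeholder):

    # today: explicit switch
    qemu-system-x86_64 -enable-nesting -hda guest.img -m 1024

    # the idea for later: implied by the CPU model
    qemu-system-x86_64 -cpu barcelona -hda guest.img -m 1024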
> And while we are at parameters, kvm-intel.ko's enable/disable
> parameters are bool, which actually makes more sense. I think it
> wouldn't hurt to make that consistent across modules, i.e. either
> turn amd's to bool or intel's to int.
>> Also, keep in mind that for now only KVM in KVM works for me.
>> Getting for example Xen running should definitely be doable - it
>> just didn't work for me last time I tried. Speed is not exactly
>> great yet either.
After I "discovered" the -enable-nesting thing, I tried it and it
worked fine. More precisely, I was able to start another guest within
the guest, and it ran OK in text mode. As soon as it booted gdm, it
hanged. I used the cirrus vga for the nested guest. The (outer) guest
was a standard Debian Lenny, so the nested guest used the kvm version
shipped with that, which is kvm-72. That actually shouldn't matter,
but
who knows...
At that point, I somehow managed to shoot off the sshd, since I was
doing that over SSH to my university machine. Now the remote machine
seems to run, but the SSH daemon is down. Thus, I cannot reconnect to
make further tests. Due to Easter holidays, I have no physical
access to
the machine right now, so I will be able to follow up on the matter
from
Tuesday onwards.
Btw, SDL performance over SSH X-forwarding just plain sucks, even
on a
100MBit network.
Yep - there was a discussion on qemu-devel about this.
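What helps a bit is skipping X forwarding altogether and letting qemu
export its own display over VNC (display number just an example):

    qemu-system-x86_64 -vnc :1 -hda guest.img -m 512

and then pointing a VNC client at <host>:5901.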
> VNC is a bit better, but there I have an issue with a "double mouse
> pointer": the local one shows as a black dot and the remote one the
> usual way, and they get terribly out of sync all the time. It's an
> enormous PITA! Do you experience that too? It happens here no matter
> how I set the "render mouse pointer locally" setting in xvnc.
This is because you have several conversion layers here. Normal VNC
just sets the mouse pointer on the remote display. With a VM, though,
you emulate a normal mouse, which only knows things like "move left by
20". That means you're stuck with a relative input device, which will
always give you the "double mouse cursor" effect.
If you want an absolute input device, try using "vmmouse" instead of
"mouse" as the input driver in your xorg.conf.
PS: I'm offline for the next two weeks, so don't expect any reply from
me in the meantime :-).
Alex