On 01/06/2010 05:16 PM, Anthony Liguori wrote:
On 01/06/2010 08:48 AM, Dor Laor wrote:
On 01/06/2010 04:32 PM, Avi Kivity wrote:
On 01/06/2010 04:22 PM, Michael S. Tsirkin wrote:
We can probably default -enable-kvm to -cpu host, as long as we explain
very carefully that if users wish to preserve cpu features across
upgrades, they can't depend on the default.
Hardware upgrades or software upgrades?
Yes.
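Purely as an illustrative sketch of what that default would mean on the
command line (the memory size and disk image name are placeholders, not
part of the proposal):

    qemu-system-x86_64 -enable-kvm -cpu host -m 1024 guest-disk.img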
I just want to remind everyone that the main motivation for using -cpu
realModelThatWasOnceShipped is to provide correct cpu emulation for the
guest. Using an arbitrary qemu64|kvm64+flag1-flag2 combination might
really cause trouble for the guest OS or guest apps.
On top of -cpu nehalem we can always add fancy features like x2apic, etc.
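As a rough sketch of what that could look like, assuming a Nehalem model
were defined as proposed (memory size and disk image are placeholders):

    qemu-system-x86_64 -enable-kvm -cpu Nehalem,+x2apic -m 1024 guest-disk.img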
I think it boils down to how people are going to use this.
For individuals, code names like Nehalem are too obscure. From my own
personal experience, even power users often have no clue whether their
processor is a Nehalem or not.
For management tools, Nehalem is a somewhat imprecise target because it
covers a wide range of potential processors. In general, I think what we
really need to do is simplify the process of going from "here's the
output of /proc/cpuinfo for 100 nodes" to "what do I need to pass to
qemu so that migration always works across these systems".
I don't think -cpu nehalem really helps with that problem. -cpu none
helps a bit, but I hope we can find something nicer.
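For what it's worth, a rough sketch of that workflow, assuming
passwordless ssh to each node and a hypothetical nodes.txt inventory
file (this is only an illustration, not the tooling under discussion),
could be as simple as intersecting the flags lines:

    # nodes.txt: one hostname per line, reachable over ssh.
    : > common.txt
    first=1
    while read -r host; do
        # grab the cpu feature flags of this host, one flag per line
        ssh -n "$host" "grep -m1 '^flags' /proc/cpuinfo" \
            | cut -d: -f2 | tr -s ' ' '\n' | sed '/^$/d' | sort > cur.txt
        if [ $first -eq 1 ]; then
            cp cur.txt common.txt; first=0
        else
            # keep only flags present on every host seen so far
            comm -12 common.txt cur.txt > tmp.txt && mv tmp.txt common.txt
        fi
    done < nodes.txt
    # common.txt now holds the feature intersection that any -cpu
    # definition used across these nodes must stay within.
    cat common.txt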
We can debate the exact name/model used to represent the Nehalem
family; I don't have an issue with that, and actually Intel and AMD
should define it.
There are two main motivations behind the above approach:
1. Sound guest cpu definition.
Using a predefined model should automatically set all the relevant
vendor/stepping/cpuid flags/cache sizes/etc.
We just can't leave this for every management application to work out
on its own; getting it wrong breaks the guest OS/apps. For instance,
MSI support in Windows guests relies on the stepping.
2. Simplifying things for the end user and mgmt tools.
qemu/kvm has the best knowledge about these low-level details. If we
push them up the stack, they eventually reach the user; the end user,
not a 'qemu-devel user', who is far more knowledgeable than the
average user. This means that such users would have to know what
popcnt is, and whether adding sse4.2 limits which hosts a guest can
migrate to.
This is exactly what VMware is doing:
- Intel CPUs :
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1991
- AMD CPUs :
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1992
Why should we reinvent the wheel (qemu64...)? Let's learn from their
experience.
This is the test description of the original patch by John:
# Intel
# -----
# Management layers remove pentium3 by default.
# It primarily remains here for testing of 32-bit migration.
#
[0:Pentium 3 Intel
:vmx
:pentium3;]
# Core 2, 65nm
# possible option sets: (+nx,+cx16), (+nx,+cx16,+ssse3)
#
1:Merom
:vmx,sse2
:qemu64,-nx,+sse2;
# Core2 45nm
#
2:Penryn
:vmx,sse2,nx,cx16,ssse3,sse4_1
:qemu64,+sse2,+cx16,+ssse3,+sse4_1;
# Core i7 45/32nm
#
3:Nehalem
:vmx,sse2,nx,cx16,ssse3,sse4_1,sse4_2,popcnt
:qemu64,+sse2,+cx16,+ssse3,+sse4_1,+sse4_2,+popcnt;
# AMD
# ---
# Management layers remove pentium3 by default.
# It primarily remains here for testing of 32-bit migration.
#
[0:Pentium 3 AMD
:svm
:pentium3;]
# Opteron 90nm stepping E1/E4/E6
# possible option sets: (-nx) for 130nm
#
1:Opteron G1
:svm,sse2,nx
:qemu64,+sse2;
# Opteron 90nm stepping F2/F3
#
2:Opteron G2
:svm,sse2,nx,cx16,rdtscp
:qemu64,+sse2,+cx16,+rdtscp;
# Opteron 65/45nm
#
3:Opteron G3
:svm,sse2,nx,cx16,sse4a,misalignsse,popcnt,abm
:qemu64,+sse2,+cx16,+sse4a,+misalignsse,+popcnt,+abm;
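As an illustrative sketch, the Nehalem row above would map onto an
invocation along these lines (memory size and disk image are
placeholders):

    qemu-system-x86_64 -enable-kvm \
        -cpu qemu64,+sse2,+cx16,+ssse3,+sse4_1,+sse4_2,+popcnt \
        -m 1024 guest-disk.img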
Regards,
Anthony Liguori