Re: [Qemu-devel] Modern CPU models cannot be used with libvirt

On 03/12/2012 09:50 PM, Anthony Liguori wrote:
> On 03/12/2012 02:12 PM, Itamar Heim wrote:
>> On 03/12/2012 09:01 PM, Anthony Liguori wrote:

>>> It's a trade-off. From a RAS perspective, it's helpful to have
>>> information about the host available in the guest.
>>>
>>> If you're already exposing a compatible family, exposing the actual
>>> processor seems to be worth the extra effort.

>> Only if the entire cluster is (and will remain?) made up of identical
>> CPUs.

> At least in my experience, this isn't unusual.

>> Or if you don't care about live migration, I guess, which could be the
>> case for clouds. Then again, I'm not sure a cloud provider would want
>> to expose the physical CPU to the tenant.

> Depends on the type of cloud you're building, I guess.

>> oVirt allows setting a "CPU family" per cluster; assume that tomorrow
>> it could do this in an even more granular way. It could also do it
>> automatically, based on the subset of flags common to all hosts - but
>> would it really make sense to expose a set of capabilities which
>> doesn't exist in the real world (which, IIUC, is pretty much what the
>> CPU families are aligned with), and which users understand?
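>>
>> (As a minimal sketch, assuming each host reports its capabilities as a
>> feature bitmask, the automatic approach would just intersect them:)
>>
>>     /* Hypothetical sketch: the "automatic" approach is a plain
>>        intersection of per-host feature masks; nothing guarantees
>>        the result matches any CPU that actually shipped. */
>>     #include <stdint.h>
>>     #include <stdio.h>
>>
>>     static uint64_t common_flags(const uint64_t *host_flags, int nhosts)
>>     {
>>         uint64_t common = ~(uint64_t)0;     /* start with everything */
>>         for (int i = 0; i < nhosts; i++)
>>             common &= host_flags[i];        /* keep only shared bits */
>>         return common;
>>     }
>>
>>     int main(void)
>>     {
>>         uint64_t hosts[] = { 0x0fff, 0x0ff7, 0x3ff5 };  /* illustrative */
>>         printf("common: %#llx\n",
>>                (unsigned long long)common_flags(hosts, 3));
>>         return 0;
>>     }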

> No, I think the lesson we've learned in QEMU (the hard way) is that
> exposing a CPU that never existed will cause something to break. Often,
> that something is glibc or GCC, and the failure tends to be rather epic.
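>
> (For illustration, a minimal sketch assuming GCC's <cpuid.h> helpers:
> userspace selects optimized code paths from CPUID bits, so a synthetic
> model whose advertised flags don't line up with any real processor can
> send that dispatch logic down the wrong path:)
>
>     #include <stdio.h>
>     #include <cpuid.h>   /* GCC helper for the CPUID instruction */
>
>     int main(void)
>     {
>         unsigned int eax, ebx, ecx, edx;
>
>         if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
>             return 1;
>
>         /* Runtime feature selection, the way glibc/GCC do it:
>            CPUID leaf 1, ECX bit 19 advertises SSE4.1. */
>         if (ecx & (1u << 19))
>             printf("using the sse4.1 path\n");
>         else
>             printf("using the fallback path\n");
>         return 0;
>     }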

Good to hear - I think this is the important part.
So from that perspective, CPU families sound like the right abstraction
for the general use case to me.
For oVirt, we could improve with smaller/dynamic subsets of migration
domains rather than the current clusters.
And it sounds like you would want to see "expose the host CPU for
non-migratable guests, or for identical clusters".

> Would it be possible to have a "best available" option in oVirt-engine
> that would assume that all processors are of the same class and fail an
> attempt to add something that's of an older class?
>
> I think most people would probably start with "best available" and then,
> after adding a node fails, revisit the decision and start lowering the
> minimum CPU family (I'm assuming that it's possible to modify the CPU
> family over time).
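>
> (A sketch of the policy I have in mind, with hypothetical types, and
> assuming families can be totally ordered:)
>
>     /* Hypothetical sketch of a "best available" cluster policy: the
>        first host pins the cluster to its family, adding an older-family
>        host fails, and the admin may lower the minimum later. */
>     typedef struct {
>         int min_family;                    /* -1 = "best available" */
>     } cluster_t;
>
>     int cluster_add_host(cluster_t *c, int host_family)
>     {
>         if (c->min_family == -1) {
>             c->min_family = host_family;   /* first host wins */
>             return 0;
>         }
>         if (host_family < c->min_family)
>             return -1;                     /* fail: older class */
>         return 0;                          /* same or newer: fine */
>     }
>
>     void cluster_lower_family(cluster_t *c, int family)
>     {
>         if (c->min_family == -1 || family < c->min_family)
>             c->min_family = family;        /* admin lowers the minimum */
>     }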

IIRC, the original implementation for CPU family was to start with an
empty family and use the best match from the first host added to the
cluster. Not sure if that's still the behavior, though.
Worth mentioning that the CPU families in oVirt have a 'sort' field to
allow starting from the best available, and you can change the CPU family
of a cluster today as well (with some validation that the hosts in the
cluster match up).


> From a QEMU perspective, I think that means having per-family CPU
> options and then Alex's '-cpu best'. But presumably it's also necessary
> to be able to figure out in virsh capabilities what '-cpu best' would be.
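>
> (Roughly, and assuming a priority-ordered model table, '-cpu best' could
> pick the newest predefined model whose required flags are all present on
> the host. The names and masks below are illustrative, not QEMU's actual
> tables:)
>
>     #include <stddef.h>
>     #include <stdint.h>
>
>     struct cpu_model { const char *name; uint64_t required; };
>
>     /* Newest first; each mask is the model's required feature set. */
>     static const struct cpu_model models[] = {
>         { "SandyBridge", 0x0fff },
>         { "Westmere",    0x07ff },
>         { "Nehalem",     0x03ff },
>         { "Conroe",      0x00ff },
>     };
>
>     const char *cpu_best(uint64_t host_flags)
>     {
>         for (size_t i = 0; i < sizeof(models) / sizeof(models[0]); i++)
>             if ((models[i].required & ~host_flags) == 0)
>                 return models[i].name;   /* newest the host can run */
>         return NULL;                     /* no predefined model fits */
>     }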

If sticking to CPU families, updating the config with the name/priority
of the families twice a year (or letting the user do it) seems good
enough to me...


> Regards,
>
> Anthony Liguori
