Re: [PATCH 1/2] conf: add crash to hyperv features

On 11/11/2015 12:30 PM, Daniel P. Berrange wrote:
On Wed, Nov 11, 2015 at 12:09:33PM +0300, Denis V. Lunev wrote:
On 11/11/2015 11:57 AM, Dmitry Andreev wrote:
Adding colleague to CC

On 11.11.2015 11:34, Jiri Denemark wrote:
On Wed, Nov 11, 2015 at 08:34:02 +0100, Peter Krempa wrote:
On Thu, Nov 05, 2015 at 16:54:02 +0300, Dmitry Andreev wrote:
On 05.11.2015 14:06, Jiri Denemark wrote:
On Mon, Oct 26, 2015 at 13:23:32 +0300, Dmitry Andreev wrote:
Add the crash CPU feature for Hyper-V. The Hyper-V crash MSRs can be used
by Hyper-V-aware guests to notify the hypervisor when a guest crash occurs.

XML:
<features>
    <hyperv>
      <crash state='on'/>
    </hyperv>
</features>
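
(For illustration only, a rough sketch of the QEMU command line this is
expected to map to, assuming the hv_* CPU flag syntax used for the other
Hyper-V features; the CPU model is just a placeholder:

     qemu-system-x86_64 ... -cpu SandyBridge,hv_crash ...
)
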
Sounds like this is related to an existing panic device we already
support. So what does enabling hv_crash do in QEMU? Is it an additional
channel to a panic device or is the panic device still needed even if
hv_crash is enabled? In any case, I think we should map this somehow to
the panic device instead of copying 1:1 the way QEMU enables hv_crash.
pvpanic and Hyper-V crash are independent ways for a guest to notify
the host about an OS crash. Both raise the QEMU 'guest panicked' event.
A domain can have both hv_crash and pvpanic enabled at the same time.

pvpanic is in the <devices> section of the domain configuration because
it is an ISA device. Hyper-V crash is a hypervisor feature which enables
a set of model-specific registers. The guest can use these registers to
send a notification and store additional information about the crash.
This is part of the Microsoft hypervisor interface.
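
(For illustration only, a sketch assuming QEMU 2.5+ with both mechanisms
enabled at once; the CPU model is just a placeholder:

     qemu-system-x86_64 ... -device pvpanic -cpu SandyBridge,hv_crash ...
)
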
Device or not, I don't really like having two distinct places to
configure similar functionality.

<devices>
   <panic model='hyperv'/>

will do just fine IMO.
Yeah, this is what I was thinking about. After all, we already do something
similar on Power. The guest panic notification is an integral part of
the platform itself so there's no device we need to add. To reflect this
in our domain XMLs, we just always add

     <devices>
       <panic/>
     </devices>

Thus using the <panic> element for hv_crash seems like the best approach to
me. We'd have just one place for configuring all kinds of guest crash
notifications. The question is what the XML should look like. The form
suggested by Peter looks good, but then we should probably add the model
attribute to all panic devices to make it consistent. So a theoretical
XML using all currently supported panic "devices" would be:

     <devices>
       <panic model='hyperv'/>  <!-- hv_crash -->
       <panic model='isa'/>     <!-- pvpanic -->
       <panic model='pseries'/> <!-- Power -->
     </devices>

The 'pseries' model would only be allowed on Power, while the others would
only be allowed on x86. We'd need to automatically add model='...' to
existing devices, but it should be pretty easy. Any older libvirt would
just ignore the attribute completely, while a new libvirt would add
model='isa' on x86 and model='pseries' on Power.
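
(For illustration only, a sketch of that normalization: an existing x86
guest with a bare

     <devices>
       <panic/>
     </devices>

would be rewritten by a new libvirt as

     <devices>
       <panic model='isa'/>
     </devices>
)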

Jirka

QEMU accepts this option through its set of Hyper-V CPU options.
In QEMU this is declared as follows:

static Property x86_cpu_properties[] = {
     DEFINE_PROP_BOOL("pmu", X86CPU, enable_pmu, false),
     { .name  = "hv-spinlocks", .info  = &qdev_prop_spinlocks },
     DEFINE_PROP_BOOL("hv-relaxed", X86CPU, hyperv_relaxed_timing, false),
     DEFINE_PROP_BOOL("hv-vapic", X86CPU, hyperv_vapic, false),
     DEFINE_PROP_BOOL("hv-time", X86CPU, hyperv_time, false),
     DEFINE_PROP_BOOL("hv-crash", X86CPU, hyperv_crash, false),
     DEFINE_PROP_BOOL("hv-reset", X86CPU, hyperv_reset, false),
     DEFINE_PROP_BOOL("hv-vpindex", X86CPU, hyperv_vpindex, false),
     DEFINE_PROP_BOOL("hv-runtime", X86CPU, hyperv_runtime, false),
     DEFINE_PROP_BOOL("check", X86CPU, check_cpuid, true),
     DEFINE_PROP_BOOL("enforce", X86CPU, enforce_cpuid, false),
     DEFINE_PROP_BOOL("kvm", X86CPU, expose_kvm, true),
     DEFINE_PROP_UINT32("level", X86CPU, env.cpuid_level, 0),
     DEFINE_PROP_UINT32("xlevel", X86CPU, env.cpuid_xlevel, 0),
     DEFINE_PROP_UINT32("xlevel2", X86CPU, env.cpuid_xlevel2, 0),
     DEFINE_PROP_STRING("hv-vendor-id", X86CPU, hyperv_vendor_id),
     DEFINE_PROP_END_OF_LIST()
};

Thus your proposal means that in this case we will have to add this
option to the CPU part of the command line from the <devices> part
of the XML.

Is this correct?
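
(Roughly, a sketch of that mapping, assuming the hv_* syntax libvirt
already generates for the other Hyper-V features:

     <panic model='isa'/>     ->  -device pvpanic
     <panic model='hyperv'/>  ->  -cpu <model>,hv_crash
)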

That is why we are proposing to follow this way and keep the Hyper-V
CPU properties in one place.

And, finally, no, this is not a device from QEMU's point of view.

Having libvirt model the XML in exactly the same way as QEMU models
its command line is an explicit *non-goal* of libvirt.

Libvirt aims to provide an XML representation that is generic
across arbitrary hypervisor platforms. So it is fully expected
that we will model things differently from QEMU in some cases
where there is a better way to do things from the libvirt POV.

All that matters is that the approach we choose in libvirt
is sufficiently expressive to cover the range of features of
the underlying platforms.


Regards,
Daniel
OK. We will follow this way. No problem.

We just need a clear agreement that all parties will be
happy with this approach.

Thank you for the proposal.

Den

--
libvir-list mailing list
libvir-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/libvir-list


