RE: [PATCH 2/3] X86: Add a check to catch Xen emulation of Hyper-V

> -----Original Message-----
> From: Stefano Stabellini [mailto:stefano.stabellini@xxxxxxxxxxxxx]
> Sent: Friday, February 01, 2013 8:20 AM
> To: H. Peter Anvin
> Cc: Jan Beulich; KY Srinivasan; olaf@xxxxxxxxx; bp@xxxxxxxxx;
> apw@xxxxxxxxxxxxx; x86@xxxxxxxxxx; tglx@xxxxxxxxxxxxx;
> devel@xxxxxxxxxxxxxxxxxxxxxx; gregkh@xxxxxxxxxxxxxxxxxxx;
> jasowang@xxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx
> Subject: Re: [PATCH 2/3] X86: Add a check to catch Xen emulation of Hyper-V
> 
> On Thu, 31 Jan 2013, H. Peter Anvin wrote:
> > On 01/30/2013 12:53 AM, Jan Beulich wrote:
> > >
> > > I'm not convinced that's the right approach - any hypervisor
> > > could do similar emulation, and hence you either want to make
> > > sure you run on Hyper-V (by excluding all others), or you
> > > tolerate using the emulation (which may require syncing up with
> > > the other guest implementations so that shared resources don't
> > > get used by two parties).
> > >
> > > I also wonder whether using the Hyper-V emulation (where
> > > useful, there might not be anything right now, but this may
> > > change going forward) when no Xen support is configured
> > > wouldn't be better than not using anything...
> > >
> >
> > I'm confused about what "the right approach" here is.  As far as I
> > understand, this only can affect a Xen guest where HyperV guest support
> > is enabled but not Xen support, and only because Xen emulates HyperV but
> > does so incorrectly.
> >
> > This is a Xen bug, and as such it makes sense to reject Xen
> > specifically.  If another hypervisor emulates HyperV and does so
> > correctly there seems to be no reason to reject it.
> 
> I don't think so.
> 
> AFAIK features were originally exported as flags, and Xen doesn't turn on
> the flags that correspond to features it does not implement.
> The problem here is that Hyper-V is about to introduce a feature that Xen
> does not implement and that has no corresponding flag (see "X86: Deliver
> Hyper-V interrupts on a separate IDT vector").
> K.Y., please confirm whether I got this right.

I am not sure I can agree with you here. There are two discriminating factors
here: (a) a hypervisor check and (b) a feature check. Not every feature of the
hypervisor can be surfaced as a feature bit, and furthermore, just because a
feature bit is turned on does not necessarily mean that the feature should be
used. For instance, say that Windows guests begin to use the "partition
counter" and Xen chooses to implement it to better support Windows. That does
not mean that, while hosting Linux on Xen, you want to plug in a clock source
based on the emulated "partition counter", yet that is exactly what would
happen with the code we have today.

Other hypervisors emulating Hyper-V do not have this problem, and Xen would
not have it either if the emulation were turned on selectively (only while
running Windows) or if the Xen check ran first, ahead of the Hyper-V check
(unconditionally), in the hypervisor detection code. As Peter pointed out, we
have this problem because of the unique situation with Xen.
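
For reference, the Xen-specific check in this series boils down to something
like the following (Xen keeps its own "XenVMMXenVMM" signature visible,
shifted up in 0x100 steps, even when its Hyper-V leaves sit at 0x40000000;
this sketch assumes the same <cpuid.h>/<string.h> includes as above):

/*
 * Scan the hypervisor CPUID bases for Xen's signature.  When Xen's
 * viridian (Hyper-V) emulation is enabled, the Hyper-V leaves occupy
 * 0x40000000 and Xen's own leaves move up by a multiple of 0x100, so
 * this still finds Xen underneath the emulation.
 */
static unsigned int xen_cpuid_base_sketch(void)
{
	unsigned int base, eax, ebx, ecx, edx;
	char sig[13];

	for (base = 0x40000000; base < 0x40010000; base += 0x100) {
		__cpuid(base, eax, ebx, ecx, edx);
		memcpy(sig + 0, &ebx, 4);
		memcpy(sig + 4, &ecx, 4);
		memcpy(sig + 8, &edx, 4);
		sig[12] = '\0';

		if (!strcmp(sig, "XenVMMXenVMM") && eax >= base + 2)
			return base;	/* we are on Xen */
	}
	return 0;			/* no Xen signature found */
}

If this returns non-zero while the Hyper-V vendor leaf also looks genuine, we
know we are looking at Xen's emulation and can bail out of the Hyper-V
platform setup.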

In any event, I am not going to argue this issue further; the last round of
patches I sent out fixes the issue for Xen. Jan wants me to make this check
more general. While I don't think we need to do that, I will see if I can. I
am checking whether MSFT guarantees that Hyper-V initializes the unused CPUID
space to 0. If it does, I will implement the check Jan has suggested; if not,
we will have to live with the Xen-specific check I currently have.
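
If that zero-initialization guarantee holds, the more general check would, as
I understand Jan's suggestion, take a shape roughly like this (hypothetical
helper, not actual code; the whole premise is the guarantee I am asking MSFT
about):

/*
 * Hypothetical: only trust a Hyper-V CPUID leaf if it is actually
 * populated, relying on Hyper-V zero-filling leaves it does not
 * implement.  Whether that zero-fill is guaranteed is exactly the
 * open question.
 */
static int hyperv_leaf_populated(unsigned int leaf)
{
	unsigned int eax, ebx, ecx, edx;

	__cpuid(leaf, eax, ebx, ecx, edx);
	return (eax | ebx | ecx | edx) != 0;
}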

> 
> If I were the Microsoft engineer implementing this feature, no matter
> what Xen does or does not, I would also make sure that there is a
> corresponding flag for it, because in my experience they avoid future
> headaches.
> I wonder what happens if you run Linux with Hyper-V support on an old
> Hyper-V host that doesn't support vector injection.
> 

To answer your specific question: the ability to distribute the vmbus
interrupt load across all VCPUs in the guest is a Win8-and-beyond feature. On
prior hosts, all interrupts are delivered to the boot CPU. VMBUS, as part of
connecting with the host, determines the host-supported protocol version and
decides how it wants to program the hypervisor with regard to interrupt
delivery. Even though we might set up an IDT entry for delivering the
hypervisor interrupt, if the host is a pre-Win8 host, the vmbus driver will
program the hypervisor to deliver the interrupt to the boot CPU via a legacy
interrupt vector.
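
In pseudo-C, the decision the vmbus driver makes looks roughly like this
(function and version names here are stand-ins, not the actual driver
symbols):

/*
 * Sketch of the negotiation outcome described above: the IDT entry
 * can be installed either way, but interrupt targeting depends on the
 * protocol version the host accepted during vmbus connect.
 */
enum host_generation { HOST_PRE_WIN8, HOST_WIN8_PLUS };	/* simplified */

/* Hypothetical helpers standing in for the real channel-binding code. */
static void bind_channels_to_boot_cpu(void)  { /* ... */ }
static void bind_channels_across_vcpus(void) { /* ... */ }

static void program_interrupt_delivery(enum host_generation host)
{
	if (host == HOST_WIN8_PLUS) {
		/* Win8 and later: spread vmbus interrupts across VCPUs. */
		bind_channels_across_vcpus();
	} else {
		/*
		 * Older hosts: everything is delivered to the boot CPU
		 * via the legacy vector, even if the new IDT entry was
		 * set up.
		 */
		bind_channels_to_boot_cpu();
	}
}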

Regards,

K. Y

_______________________________________________
devel mailing list
devel@xxxxxxxxxxxxxxxxxxxxxx
http://driverdev.linuxdriverproject.org/mailman/listinfo/devel

