On 2015-02-16 09:57, Marc Zyngier wrote:
> On 15/02/15 19:03, Jan Kiszka wrote:
>> On 2015-02-15 19:01, Jan Kiszka wrote:
>>> On 2015-02-15 16:30, Marc Zyngier wrote:
>>>> On Sun, Feb 15 2015 at 3:07:50 pm GMT, Jan Kiszka
>>>> <jan.kiszka@xxxxxx> wrote:
>>>>> On 2015-02-15 15:59, Marc Zyngier wrote:
>>>>>> On Sun, Feb 15 2015 at 2:40:40 pm GMT, Jan Kiszka
>>>>>> <jan.kiszka@xxxxxx> wrote:
>>>>>>> On 2015-02-15 14:37, Marc Zyngier wrote:
>>>>>>>> On Sun, Feb 15 2015 at 8:53:30 am GMT, Jan Kiszka
>>>>>>>> <jan.kiszka@xxxxxx> wrote:
>>>>>>>>> I'm now throwing trace_printk at my broken KVM. Already found
>>>>>>>>> out that I get ARM_EXCEPTION_IRQ every few tens of µs. Not
>>>>>>>>> seeing any irq_* traces, though. Weird.
>>>>>>>>
>>>>>>>> This very much looks like a screaming interrupt. At such a
>>>>>>>> rate, no wonder your VM can't make much progress. Can you find
>>>>>>>> out which interrupt is screaming like this? Looking at
>>>>>>>> GICC_HPPIR should help, but you'll have to map the CPU
>>>>>>>> interface in HYP before being able to access it there.
>>>>>>>
>>>>>>> OK... let me figure this out. I had this suspicion as well -
>>>>>>> does the host get a VM exit for each injected guest IRQ?
>>>>>>
>>>>>> Not exactly. There is a VM exit for each physical interrupt that
>>>>>> fires while the guest is running. Injecting an interrupt also
>>>>>> causes a VM exit, as we force the vcpu to reload its context.
>>>>>
>>>>> Ah, GICC != GICV - you are referring to host-side pending IRQs.
>>>>> Any hints on how to get access to that register would accelerate
>>>>> the analysis (ARM KVM code is still new to me).
>>>>
>>>> Map the GICC region in HYP using create_hyp_io_mapping (see
>>>> vgic_v2_probe for an example of how we map GICH), and stash the
>>>> read of GICC_HPPIR before leaving HYP mode (and before saving the
>>>> guest timer).
>>>
>>> Hacked on it until it started to work. The results delivered
>>> initially are 0x002 or 0x01e. Then, when the guest gets stuck, I
>>> see 0x01b most of the time (a few 0x01e arrive when there is a
>>> real host irq). The virtual timer on speed?
>>>
>>> Wait, there is also early printk for ARM, but it was off in my
>>> guest! Turning it on confirms we have some problems here:
>>>
>>>   Architected timer frequency not available
>>>   Division by zero in kernel.
>>>
>>> When in emulation mode, I get:
>>>
>>>   Architected cp15 timer(s) running at 62.50MHz (virt).
>>>
>>> Digging deeper.
>>
>> U-Boot didn't initialize CNTFRQ on cores 1..3. Fixing this, the
>> guest passes early boot reliably, but now hangs much later (RCU
>> stalls are detected by the guest).
>
> Right, that explains a lot of things. Can you describe a bit more
> what you're seeing now?

Sorry, should have updated this thread:
http://thread.gmane.org/gmane.comp.emulators.kvm.arm.devel/17

This issue is no longer KVM-related.

What might be KVM-related, or also a QEMU issue, is broken framebuffer
support once KVM is enabled in QEMU. Not yet reported, will do soon on
qemu-devel.

Jan
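
For reference, a minimal sketch of the GICC-in-HYP mapping Marc describes
above, modeled on the way vgic_v2_probe() maps GICH. It is not the actual
hack used in the thread: it assumes the 3.19-era create_hyp_io_mappings()
signature, assumes reg index 1 of the GIC device-tree node is the CPU
interface, and the gicc_debug_base/map_gicc_for_debug names are purely
illustrative.

/*
 * Debug-only sketch (not upstream code): map the GIC CPU interface into
 * HYP so GICC_HPPIR (offset 0x18) can be sampled on guest exit.
 */
#include <linux/errno.h>
#include <linux/ioport.h>
#include <linux/of_address.h>
#include <asm/kvm_mmu.h>

static void __iomem *gicc_debug_base;	/* illustrative global */

static int map_gicc_for_debug(struct device_node *gic_node)
{
	struct resource gicc_res;

	/* reg index 1 of the GIC node assumed to be GICC */
	if (of_address_to_resource(gic_node, 1, &gicc_res))
		return -ENXIO;

	gicc_debug_base = of_iomap(gic_node, 1);
	if (!gicc_debug_base)
		return -ENOMEM;

	/* Same call vgic_v2_probe() uses for the GICH region. */
	return create_hyp_io_mappings(gicc_debug_base,
				      gicc_debug_base + resource_size(&gicc_res),
				      gicc_res.start);
}

The stash itself has to happen in the world-switch exit path before
leaving HYP (on 32-bit ARM that is assembly, so in practice a load from
base + 0x18 into a spare per-CPU slot). The 0x01b read back above is
interrupt ID 27, i.e. the virtual timer PPI, which fits the "virtual
timer on speed" guess.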
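And for completeness, the kind of per-core init that was missing in
U-Boot: CNTFRQ has to be programmed from the secure side on every core,
not just the boot CPU; a core that leaves it at 0 gives the guest exactly
the "Architected timer frequency not available" / division-by-zero splat
quoted above. A minimal ARMv7 sketch, with an example 24 MHz value and an
illustrative helper name (not actual U-Boot code):

/*
 * CNTFRQ is CP15 c14, c0, 0; writing it is only possible from the
 * secure side, so this must run during each core's secure bring-up.
 */
static inline void set_cntfrq(unsigned long freq_hz)
{
	asm volatile("mcr p15, 0, %0, c14, c0, 0" : : "r" (freq_hz));
	asm volatile("isb");
}

/* Called on each secondary core before dropping to non-secure/HYP: */
void secondary_timer_init(void)
{
	set_cntfrq(24000000);	/* example: 24 MHz system counter */
}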