Re: arm: warning at virt/kvm/arm/vgic.c:1468

On 2015-02-15 16:30, Marc Zyngier wrote:
> On Sun, Feb 15 2015 at  3:07:50 pm GMT, Jan Kiszka <jan.kiszka@xxxxxx> wrote:
>> On 2015-02-15 15:59, Marc Zyngier wrote:
>>> On Sun, Feb 15 2015 at  2:40:40 pm GMT, Jan Kiszka <jan.kiszka@xxxxxx> wrote:
>>>> On 2015-02-15 14:37, Marc Zyngier wrote:
>>>>> On Sun, Feb 15 2015 at 8:53:30 am GMT, Jan Kiszka
>>>>> <jan.kiszka@xxxxxx> wrote:
>>>>>> I'm now throwing trace_printk at my broken KVM. Already found out that I
>>>>>> get ARM_EXCEPTION_IRQ every few tens of µs. Not seeing any irq_* traces,
>>>>>> though. Weird.
>>>>>
>>>>> This very much looks like a screaming interrupt. At such a rate, no
>>>>> wonder your VM doesn't make much progress. Can you find out which interrupt is
>>>>> screaming like this? Looking at GICC_HPPIR should help, but you'll have
>>>>> to map the CPU interface in HYP before being able to access it there.
>>>>
>>>> OK... let me figure this out. I had this suspect as well - the host gets
>>>> a VM exit for each injected guest IRQ?
>>>
>>> Not exactly. There is a VM exit for each physical interrupt that fires
>>> while the guest is running. Injecting an interrupt also causes a VM
>>> exit, as we force the vcpu to reload its context.
>>
>> Ah, GICC != GICV - you are referring to host-side pending IRQs. Any
>> hints on how to get access to that register would accelerate the
>> analysis (ARM KVM code is still new to me).
> 
> Map the GICC region in HYP using create_hyp_io_mappings (see
> vgic_v2_probe for an example of how we map GICH), and stash the read of
> GICC_HPPIR before leaving HYP mode (and before saving the guest timer).

OK.
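
Something like the following is what I'll try, then - an untested
sketch, with the lookup and naming guessed from how vgic_v2_probe maps
GICH (reg entry 1 of the GIC node should be the CPU interface):

#include <linux/of_address.h>
#include <linux/irqchip/arm-gic.h>
#include <asm/kvm_mmu.h>

static void __iomem *gicc_base;

static int map_gicc_into_hyp(struct device_node *gic_node)
{
	struct resource gicc_res;

	/* reg entry 1 of the GIC node is the CPU interface (GICC) */
	if (of_address_to_resource(gic_node, 1, &gicc_res))
		return -ENXIO;

	gicc_base = of_iomap(gic_node, 1);
	if (!gicc_base)
		return -ENOMEM;

	/* same pattern vgic_v2_probe uses for the GICH region */
	return create_hyp_io_mappings(gicc_base,
				      gicc_base + resource_size(&gicc_res),
				      gicc_res.start);
}

/*
 * The GICC_HPPIR read itself (offset 0x18, GIC_CPU_HIGHPRI) would then
 * have to go into the HYP exit path in interrupts.S, stashing the value
 * somewhere the host side can trace_printk afterwards.
 */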

> 
> BTW, when you look at /proc/interrupts on the host, don't you see an
> interrupt that's a bit too eager to fire?

No - but that makes sense: according to ftrace we never enter any
interrupt handler, so there is nothing to increment the counters.

> 
>>>> BTW, I also tried with in-kernel GIC disabled (in the kernel config),
>>>> but I guess that's pointless. Linux seems to be stuck on a
>>>> non-functional architectural timer then, right?
>>>
>>> Yes. Useful for bringup, but nothing more.
>>
>> Maybe we should perform a feature check and issue a warning from QEMU?
> 
> I'd assume this is already in place (but I almost never run QEMU, so I
> could be wrong here).

Nope, QEMU starts up fine, just lets the guest starve while waiting for
jiffies to increase.
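
Something like this would probably do - a rough sketch, untested, and
the placement (wherever the virt machine sets up its GIC) is just my
guess:

#include "qemu/error-report.h"
#include "sysemu/kvm.h"

/*
 * Warn if no in-kernel GIC is available: without it the guest has no
 * working architected timer and just spins waiting for jiffies.
 */
if (kvm_enabled() && !kvm_check_extension(kvm_state, KVM_CAP_IRQCHIP)) {
    error_report("KVM: no in-kernel GIC on this host; "
                 "guest timers will not work");
}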

> 
>>> I still wonder if the 4+1 design on the K1 is not playing tricks behind
>>> our back. Having talked to Ian Campbell earlier this week, he also can't
>>> manage to run guests in Xen on this platform, so there's something
>>> rather fishy here.
>>
>> Interesting. The announcements of his PSCI patches [1] sounded more
>> promising. Maybe it was only referring to getting the hypervisor itself
>> running...
> 
> This is my understanding so far.
> 
>> My current (still limited) understanding of that platform is that this
>> little core is parked after power-up of the main APs. And as we do not
>> power them down, there is no reason to perform a cluster switch or
>> anything similarly nasty, no?
> 
> I can't see why this would happen, but I've learned not to assume
> anything when it comes to braindead creativity on the HW side...

True.

Jan


