Re: I{S,C}ACTIVER implementation question

On 06/04/2020 16:14, Marc Zyngier wrote:
Hi Julien,

Hi Marc,


Thanks for the heads up.

On 2020-04-06 14:16, Julien Grall wrote:
Hi,

The Xen community is currently reviewing a new implementation for reading
the I{S,C}ACTIVER registers (see [1]).

The implementation is based on vgic_mmio_read_active() in KVM, i.e. the
active state of the interrupts is derived from the vGIC state stored in
memory.
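
Roughly, the pattern is the following (a simplified sketch modelled on
KVM's vgic_mmio_read_active(); vgic_get_irq()/vgic_put_irq() and
VGIC_ADDR_TO_INTID() are the existing KVM helpers, but the body is
illustrative, not the verbatim implementation):

/*
 * Sketch of an ISACTIVER read handler: the register value is built
 * purely from the software copy of each interrupt's active bit.
 */
static unsigned long read_active_sketch(struct kvm_vcpu *vcpu,
                                        gpa_t addr, unsigned int len)
{
        u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
        u32 value = 0;
        unsigned int i;

        for (i = 0; i < len * 8; i++) {
                struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu,
                                                    intid + i);

                /*
                 * irq->active is the in-memory state: it is only up to
                 * date if the owning vCPU has exited since it last
                 * touched the interrupt.
                 */
                if (irq->active)
                        value |= BIT(i);

                vgic_put_irq(vcpu->kvm, irq);
        }

        return value;
}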

While reviewing the patch on xen-devel, I noticed a potential deadlock,
at least with the Xen implementation. I know that the Xen vGIC and the
KVM vGIC are quite different, so I looked at the KVM implementation to
see how this case is dealt with.

With my limited knowledge of KVM, I wasn't able to rule it out. I am
curious to know if I missed anything.

vCPU A may read the active state of an interrupt routed to vCPU B.
When vCPU A is reading the state, it will read the state stored in
memory.

The only way the memory state can get synced with the HW state is when
vCPU B exits guest context.

AFAICT, vCPU B will not exit when deactivating HW-mapped interrupts
or virtual edge interrupts. So vCPU B may run for an arbitrarily long
time before exiting and syncing the memory state with the HW state.
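
To illustrate, here is a hypothetical guest-side loop on vCPU A that
would be exposed to this (purely illustrative: the distributor base is
the one QEMU's virt machine uses, and the register offset is the
architectural GICD_ISACTIVER<n> offset):

#include <stdint.h>

#define GICD_BASE         0x08000000UL
#define GICD_ISACTIVER(n) \
        (*(volatile uint32_t *)(GICD_BASE + 0x300 + 4 * (n)))

static void wait_until_inactive(unsigned int intid)
{
        uint32_t mask = 1U << (intid % 32);

        /*
         * Each read traps to the hypervisor, which answers from its
         * in-memory copy of the active state. If the interrupt is
         * routed to vCPU B, and vCPU B deactivates it without taking
         * an exit (HW-mapped or virtual edge interrupt), the stale
         * "active" bit is never refreshed and this loop spins for an
         * unbounded amount of time.
         */
        while (GICD_ISACTIVER(intid / 32) & mask)
                __asm__ volatile("yield" ::: "memory");
}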

So while I agree that this is definitely not ideal, I don't think we end up
with a deadlock (or rather a livelock) either. That's because we are guaranteed
to exit eventually, if only because the kernel's own timer interrupt (or any
other host interrupt routed to the same physical CPU) will fire and get us
out of there. On its own, this is enough to allow the polling vCPU to make
forward progress.

That's a good point. I think in Xen we can't rely on this, because in some setups (such as a pCPU dedicated to a vCPU) there will be close to zero host interrupts (the timer is only used for scheduling).


Now, it is obvious that we should improve on the current situation. I just
hacked together a patch that provides the same guarantees as the one we
already have on the write side (kick all vcpus out of the guest, snapshot
the state, kick everyone back in). I boot-tested it, so it is obviously perfect
and won't eat your data at all! ;-)
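
For the record, the shape of that approach is roughly the following
(a sketch: kvm_arm_halt_guest()/kvm_arm_resume_guest() are the existing
KVM helpers, and read_active_sketch() is the illustrative read path from
earlier in this thread; this is not the actual patch):

/*
 * Force every vCPU out of the guest so that their in-flight GIC state
 * is folded back into memory, read the now-coherent active bits, then
 * let everyone run again. This mirrors what the I{S,C}ACTIVER write
 * path already does.
 */
static unsigned long read_active_coherent(struct kvm_vcpu *vcpu,
                                          gpa_t addr, unsigned int len)
{
        unsigned long val;

        /* Kick all vCPUs out; they sync vGIC state to memory on exit. */
        kvm_arm_halt_guest(vcpu->kvm);

        /* Snapshot the active state from the now-stable copy. */
        val = read_active_sketch(vcpu, addr, len);

        /* Kick everyone back in. */
        kvm_arm_resume_guest(vcpu->kvm);

        return val;
}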

Thank you for the patch! This is similar to what I had in mind.

Cheers,

--
Julien Grall
_______________________________________________
kvmarm mailing list
kvmarm@xxxxxxxxxxxxxxxxxxxxx
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


