I've spent a considerable amount of time trying to figure out why this
works. Logic tells me that we should only mark a page dirty (through
SetPageDirty) if it gets mapped writable into the guest, but with this
seemingly correct behavior, VMs crash on random memory faults under heavy
memory pressure. I'm not sure whether this is a stable fix or whether it
simply hides the problem; input on this is very welcome.

Signed-off-by: Christoffer Dall <c.dall@xxxxxxxxxxxxxxxxxxxxxx>
---
 arch/arm/kvm/mmu.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index e741d1d..d44a514 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -573,10 +573,14 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	spin_unlock(&vcpu->kvm->arch.pgd_lock);
 out:
-	if (writable && !ret)
-		kvm_release_pfn_dirty(pfn);
-	else
-		kvm_release_pfn_clean(pfn);
+	/*
+	 * XXX TODO FIXME:
+	 * This is _really_ *weird* !!!
+	 * We should only be calling the _dirty version when we map something
+	 * writable, but this causes memory failures in guests under heavy
+	 * memory pressure on the host and heavy swapping.
+	 */
+	kvm_release_pfn_dirty(pfn);
 out_put_existing:
 	if (!is_error_pfn(pfn_existing))
 		kvm_release_pfn_clean(pfn_existing);
--
1.7.9.5

_______________________________________________
kvmarm mailing list
kvmarm@xxxxxxxxxxxxxxxxxxxxx
https://lists.cs.columbia.edu/cucslists/listinfo/kvmarm