Re: [PATCH] KVM: selftests: Wait mprotect_ro_done before write to RO in mmu_stress_test

On Wed, Feb 26, 2025, Yan Zhao wrote:
> On Tue, Feb 25, 2025 at 05:48:39PM -0800, Sean Christopherson wrote:
> > On Sat, Feb 08, 2025, Yan Zhao wrote:
> > > In the read-only mprotect() phase of mmu_stress_test, ensure that
> > > mprotect(PROT_READ) has completed before the guest starts writing to the
> > > memory made read-only by mprotect().
> > > 
> > > Without waiting for mprotect_ro_done before the guest starts writing in
> > > stage 3 (the stage for read-only mprotect()), the host's assertion of stage
> > > 3 could fail if mprotect_ro_done is set to true in the window between the
> > > guest finishing writes to all GPAs and executing GUEST_SYNC(3).
> > > 
> > > This scenario occurs easily, especially when there are hundreds of vCPUs.
> > > 
> > > CPU 0                  CPU 1 guest     CPU 1 host
> > >                                        enter stage 3's 1st loop
> > >                        //in stage 3
> > >                        write all GPAs
> > >                        @rip 0x4025f0
> > > 
> > > mprotect(PROT_READ)
> > > mprotect_ro_done=true
> > >                        GUEST_SYNC(3)
> > >                                        r=0, continue stage 3's 1st loop
> > > 
> > >                        //in stage 4
> > >                        write GPA
> > >                        @rip 0x402635
> > > 
> > >                                        -EFAULT, jump out stage 3's 1st loop
> > >                                        enter stage 3's 2nd loop
> > >                        write GPA
> > >                        @rip 0x402635
> > >                                        -EFAULT, continue stage 3's 2nd loop
> > >                                        guest rip += 3
> > > 
> > > The test then fails and reports "Unhandled exception '0xe' at guest RIP
> > > '0x402638'", since the next valid guest rip address is 0x402639: the
> > > "(mem) = val" in vcpu_arch_put_guest() is compiled into a mov instruction
> > > of length 4, but the host-side skip advanced rip by only 3.
> > 
> > This shouldn't happen.  On x86, stage 3 is a hand-coded "mov %rax, (%rax)", not
> > vcpu_arch_put_guest().  Either something else is going on, or __x86_64__ isn't
> > defined?
> Stage 3 is the hand-coded "mov %rax, (%rax)", but stage 4 uses
> vcpu_arch_put_guest().
> 
> The original code expects that the "mov %rax, (%rax)" in stage 3 will produce
> -EFAULT, so that the host thread can jump out of stage 3's 1st vcpu_run()
> loop.
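
For reference, the "rip += 3" in the diagram above is only safe because the
stage-3 store is hand-coded to a known 3-byte encoding.  Here is a stand-alone
sketch of that skip-the-faulting-instruction mechanism, outside of KVM
(hypothetical file and names, Linux/x86-64 only, with a SIGSEGV handler
standing in for the host-side fixup loop; illustration, not the selftest):

/*
 * skip_demo.c - illustration only, not the selftest.  A store to a
 * read-only page faults; the handler skips the hand-coded 3-byte
 * "mov %rax, (%rax)" (0x48 0x89 0x00) by bumping RIP by 3.  If the
 * faulting instruction were instead a 4-byte mov, RIP would land
 * mid-instruction; that is the "Unhandled exception" failure mode above.
 *
 * Build: gcc -o skip_demo skip_demo.c
 */
#define _GNU_SOURCE
#include <signal.h>
#include <stdio.h>
#include <sys/mman.h>
#include <ucontext.h>

static void segv_handler(int sig, siginfo_t *info, void *ucontext)
{
	ucontext_t *uc = ucontext;

	/* Mirror the host-side fixup: skip the known-3-byte faulting mov. */
	uc->uc_mcontext.gregs[REG_RIP] += 3;
}

int main(void)
{
	struct sigaction sa = {
		.sa_sigaction = segv_handler,
		.sa_flags = SA_SIGINFO,
	};
	void *page;

	sigaction(SIGSEGV, &sa, NULL);

	page = mmap(NULL, 4096, PROT_READ, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (page == MAP_FAILED)
		return 1;

	/* 3-byte store to the read-only page: it faults and gets skipped. */
	asm volatile(".byte 0x48, 0x89, 0x00" :: "a"(page) : "memory");

	printf("3-byte store skipped at its exact length\n");
	return 0;
}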

Ugh, I forgot that there are two loops in stage-3.  I tried to prevent this race,
but violated my own rule of not using arbitrary delays to avoid races.

Completely untested, but I think this should address the problem (I'll test it
later today; you already did the hard work of debugging).  The only thing I'm
not positive about is whether making the first _vcpu_run() a one-off instead of
a loop is correct.

diff --git a/tools/testing/selftests/kvm/mmu_stress_test.c b/tools/testing/selftests/kvm/mmu_stress_test.c
index d9c76b4c0d88..9ac1800bb770 100644
--- a/tools/testing/selftests/kvm/mmu_stress_test.c
+++ b/tools/testing/selftests/kvm/mmu_stress_test.c
@@ -18,6 +18,7 @@
 #include "ucall_common.h"
 
 static bool mprotect_ro_done;
+static bool vcpu_hit_ro_fault;
 
 static void guest_code(uint64_t start_gpa, uint64_t end_gpa, uint64_t stride)
 {
@@ -36,9 +37,9 @@ static void guest_code(uint64_t start_gpa, uint64_t end_gpa, uint64_t stride)
 
        /*
         * Write to the region while mprotect(PROT_READ) is underway.  Keep
-        * looping until the memory is guaranteed to be read-only, otherwise
-        * vCPUs may complete their writes and advance to the next stage
-        * prematurely.
+        * looping until the memory is guaranteed to be read-only and a fault
+        * has occurred, otherwise vCPUs may complete their writes and advance
+        * to the next stage prematurely.
         *
         * For architectures that support skipping the faulting instruction,
         * generate the store via inline assembly to ensure the exact length
@@ -56,7 +57,7 @@ static void guest_code(uint64_t start_gpa, uint64_t end_gpa, uint64_t stride)
 #else
                        vcpu_arch_put_guest(*((volatile uint64_t *)gpa), gpa);
 #endif
-       } while (!READ_ONCE(mprotect_ro_done));
+       } while (!READ_ONCE(mprotect_ro_done) || !READ_ONCE(vcpu_hit_ro_fault));
 
        /*
         * Only architectures that write the entire range can explicitly sync,
@@ -148,12 +149,13 @@ static void *vcpu_worker(void *data)
         * be stuck on the faulting instruction for other architectures.  Go to
         * stage 3 without a rendezvous
         */
-       do {
-               r = _vcpu_run(vcpu);
-       } while (!r);
+       r = _vcpu_run(vcpu);
        TEST_ASSERT(r == -1 && errno == EFAULT,
                    "Expected EFAULT on write to RO memory, got r = %d, errno = %d", r, errno);
 
+       /* Tell the guest a RO fault was observed, so it can exit its write loop. */
+       WRITE_ONCE(vcpu_hit_ro_fault, true);
+
 #if defined(__x86_64__) || defined(__aarch64__)
        /*
         * Verify *all* writes from the guest hit EFAULT due to the VMA now
@@ -378,7 +380,6 @@ int main(int argc, char *argv[])
        rendezvous_with_vcpus(&time_run2, "run 2");
 
        mprotect(mem, slot_size, PROT_READ);
-       usleep(10);
        mprotect_ro_done = true;
        sync_global_to_guest(vm, mprotect_ro_done);
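
The ordering this enforces can also be sketched in isolation (a minimal
pthread stand-in with hypothetical names; the real test runs a vCPU and
propagates mprotect_ro_done with sync_global_to_guest() rather than sharing
atomics):

/*
 * handshake_demo.c - sketch only, not the selftest.  The "guest" keeps
 * writing until the memory is read-only AND the "host" has observed a
 * fault, so it can no longer reach GUEST_SYNC(3) without faulting.
 *
 * Build: gcc -pthread -o handshake_demo handshake_demo.c
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool mprotect_ro_done;	/* host: mprotect(PROT_READ) returned */
static atomic_bool vcpu_hit_ro_fault;	/* host: first -EFAULT was observed */
static atomic_bool faulted;		/* stand-in for a vCPU -EFAULT exit */

static void *guest(void *arg)
{
	/* Stage 3: loop until the memory is RO *and* a fault was observed. */
	do {
		/* Stand-in for the stores; once RO, a store would fault. */
		if (atomic_load(&mprotect_ro_done))
			atomic_store(&faulted, true);
	} while (!atomic_load(&mprotect_ro_done) ||
		 !atomic_load(&vcpu_hit_ro_fault));
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, guest, NULL);

	atomic_store(&mprotect_ro_done, true);	/* mprotect() has completed */

	while (!atomic_load(&faulted))		/* the one-off _vcpu_run(), */
		;				/* waiting for -EFAULT */

	atomic_store(&vcpu_hit_ro_fault, true);	/* release the guest */

	pthread_join(t, NULL);
	printf("guest left the write loop only after the fault was observed\n");
	return 0;
}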




