On Fri, Sep 08, 2023, Anish Moorthy wrote:
> @@ -2318,4 +2324,33 @@ static inline void kvm_account_pgtable_pages(void *virt, int nr)
>  /* Max number of entries allowed for each kvm dirty ring */
>  #define KVM_DIRTY_RING_MAX_ENTRIES 65536
> 
> +/*
> + * Attempts to set the run struct's exit reason to KVM_EXIT_MEMORY_FAULT and
> + * populate the memory_fault field with the given information.
> + *
> + * WARNs and does nothing if the speculative exit canary has already been set
> + * or if 'vcpu' is not the current running vcpu.
> + */
> +static inline void kvm_handle_guest_uaccess_fault(struct kvm_vcpu *vcpu,
> +						  uint64_t gpa, uint64_t len, uint64_t flags)

After a lot of fiddling and leading you on a wild goose chase, I think the least
awful name is kvm_prepare_memory_fault_exit().  Like
kvm_prepare_emulation_failure_exit(), this doesn't actually "handle" anything,
it just preps for the exit.  If it actually returned something, then maybe
kvm_handle_guest_uaccess_fault() would be an ok name (IIRC, that was my original
intent, but we wandered in a different direction).

And peeking at future patches, pass in the RWX flags as bools; that way this
helper can deal with the bools=>flags conversion.  Oh, and fill the flags with
bitwise ORs, that way future conflicts with private memory will be trivial to
resolve.  E.g.

static inline void kvm_prepare_memory_fault_exit(struct kvm_vcpu *vcpu,
						 gpa_t gpa, gpa_t size,
						 bool is_write, bool is_exec)
{
	vcpu->run->exit_reason = KVM_EXIT_MEMORY_FAULT;
	vcpu->run->memory_fault.gpa = gpa;
	vcpu->run->memory_fault.size = size;

	vcpu->run->memory_fault.flags = 0;
	if (is_write)
		vcpu->run->memory_fault.flags |= KVM_MEMORY_FAULT_FLAG_WRITE;
	else if (is_exec)
		vcpu->run->memory_fault.flags |= KVM_MEMORY_FAULT_FLAG_EXEC;
	else
		vcpu->run->memory_fault.flags |= KVM_MEMORY_FAULT_FLAG_READ;
}

> +{
> +	/*
> +	 * Ensure that an unloaded vCPU's run struct isn't being modified

"unloaded" isn't accurate, e.g. the vCPU could be loaded, just not on this pCPU.
I'd just drop the comment entirely, this one is fairly self-explanatory.

> +	 */
> +	if (WARN_ON_ONCE(vcpu != kvm_get_running_vcpu()))
> +		return;
> +
> +	/*
> +	 * Warn when overwriting an already-populated run struct.
> +	 */

For future reference, use this style

	/*
	 *
	 */

only if the comment spans multiple lines.  For single line comments, just:

	/* Warn when overwriting an already-populated run struct. */

> +	WARN_ON_ONCE(vcpu->speculative_exit_canary != KVM_SPEC_EXIT_UNUSED);

As mentioned in the guest_memfd thread[1], this WARN can be triggered by
userspace, e.g. by getting KVM to fill the union but not exit, which is sadly
not too hard because emulator_write_phys() incorrectly treats all failures as
MMIO.

I'm not even sure how to fix that in a race-free, sane way.  E.g. rechecking
the memslots doesn't work because a memslot could appear between
__kvm_write_guest_page() failing and rechecking in
emulator_read_write_onepage().

Hmm, maybe we could get away with returning a different errno, e.g. -ENXIO?
And then emulator_write_phys() and emulator_read_write_onepage() can be taught
to handle different errors accordingly.

Anyways, I highly recommend just dropping the canary for now, trying to clean
up the emulator and get this fully functional probably won't be a smooth
process.

> diff --git a/tools/include/uapi/linux/kvm.h b/tools/include/uapi/linux/kvm.h
> index f089ab290978..d19aa7965392 100644
> --- a/tools/include/uapi/linux/kvm.h
> +++ b/tools/include/uapi/linux/kvm.h
> @@ -278,6 +278,9 @@ struct kvm_xen_exit {
>  /* Flags that describe what fields in emulation_failure hold valid data.
>   */
>  #define KVM_INTERNAL_ERROR_EMULATION_FLAG_INSTRUCTION_BYTES	(1ULL << 0)
> 
> +/* KVM_CAP_MEMORY_FAULT_INFO flag for kvm_run.flags */
> +#define KVM_RUN_MEMORY_FAULT_FILLED	(1 << 8)
> +
>  /* for KVM_RUN, returned by mmap(vcpu_fd, offset=0) */
>  struct kvm_run {
>  	/* in */
> @@ -531,6 +534,27 @@ struct kvm_run {
>  		struct kvm_sync_regs regs;
>  		char padding[SYNC_REGS_SIZE_BYTES];
>  	} s;
> +
> +	/*
> +	 * This second exit union holds structs for exits which may be triggered
> +	 * after KVM has already initiated a different exit, and/or may be
> +	 * filled speculatively by KVM.
> +	 *
> +	 * For instance, because of limitations in KVM's uAPI, a memory fault
> +	 * may be encounterd after an MMIO exit is initiated and exit_reason and
> +	 * kvm_run.mmio are filled: isolating the speculative exits here ensures
> +	 * that KVM won't clobber information for the original exit.
> +	 */
> +	union {
> +		/* KVM_RUN_MEMORY_FAULT_FILLED + EFAULT */
> +		struct {
> +			__u64 flags;
> +			__u64 gpa;
> +			__u64 len;
> +		} memory_fault;
> +		/* Fix the size of the union. */
> +		char speculative_exit_padding[256];
> +	};
>  };

As proposed in the guest_memfd thread[2], I think we should scrap the second
union and just commit to achieving 100% accuracy only for page fault paths in
the initial merge.

I'll send you a clean-ish patch to use as a starting point sometime next week.

[1] https://lore.kernel.org/all/ZRtxoaJdVF1C2Mvy@xxxxxxxxxx
[2] https://lore.kernel.org/all/ZQ3AmLO2SYv3DszH@xxxxxxxxxx