Re: [PATCH v3 08/12] KVM: Propagate vcpu explicitly to mark_page_dirty_in_slot()

On 18 November 2021 18:50:55 GMT, Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
>On Thu, Nov 18, 2021, Sean Christopherson wrote:
>> On Thu, Nov 18, 2021, David Woodhouse wrote:
>> > That leaves the one in TDP MMU handle_changed_spte_dirty_log() which
>> > AFAICT can trigger the same crash seen by butt3rflyh4ck — can't that
>> > happen from a thread where kvm_get_running_vcpu() is NULL too? For that
>> > one I'm not sure.
>> 
>> I think it could be triggered in the TDP MMU via kvm_mmu_notifier_release()
>> -> kvm_mmu_zap_all(), e.g. if the userspace VMM exits while dirty logging is
>> enabled.  That should be easy to (dis)prove via a selftest.
>
>Scratch that, the dirty log update is guarded by the new_spte being present, so
>zapping of any kind won't trigger it.
>
>Currently, I believe the only path that would create a present SPTE without an
>active vCPU is mmu_notifier.change_pte, but that squeaks by because it's required
>to be wrapped with invalidate_range_{start,end}(MMU_NOTIFY_CLEAR), and KVM zaps
>in that situation.

Is it sufficient to have *an* active vCPU?  What if a VMM has threads for active vCPUs but is doing mmap/munmap on a *different* thread? Does that not suffer the same crash?
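
To make the failure mode concrete, here's a rough sketch of the path I
have in mind (paraphrasing kvm_dirty_ring_get() as it stands in the
current tree, with the dirty ring enabled; the call chain in the comment
is my assumption, not a tested reproducer):

	/*
	 * Hypothetical trigger: the VMM does munmap() on a thread that
	 * is not in KVM_RUN -> mmu_notifier invalidation -> TDP MMU
	 * handle_changed_spte_dirty_log() -> mark_page_dirty_in_slot()
	 * -> kvm_dirty_ring_get().
	 */
	struct kvm_dirty_ring *kvm_dirty_ring_get(struct kvm *kvm)
	{
		struct kvm_vcpu *vcpu = kvm_get_running_vcpu();

		/*
		 * kvm_get_running_vcpu() is per-thread state, so this is
		 * NULL on the mmap/munmap thread even while other threads
		 * have vCPUs happily running...
		 */
		WARN_ON_ONCE(vcpu->kvm != kvm);	/* ... so this oopses */

		return &vcpu->dirty_ring;
	}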


-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.
