Re: [PATCH v3 08/12] KVM: Propagate vcpu explicitly to mark_page_dirty_in_slot()


 



On Thu, Nov 18, 2021, David Woodhouse wrote:
> That leaves the one in TDP MMU handle_changed_spte_dirty_log() which
> AFAICT can trigger the same crash seen by butt3rflyh4ck — can't that
> happen from a thread where kvm_get_running_vcpu() is NULL too? For that
> one I'm not sure.

I think this could be triggered in the TDP MMU via kvm_mmu_notifier_release()
-> kvm_mmu_zap_all(), e.g. if the userspace VMM exits while dirty logging is
enabled.  That should be easy to (dis)prove via a selftest.

And for the record :-)

On Mon, Dec 02, 2019 at 12:10:36PM -0800, Sean Christopherson wrote:
> IMO, adding kvm_get_running_vcpu() is a hack that is just asking for future
> abuse and the vcpu/vm/as_id interactions in mark_page_dirty_in_ring()
> look extremely fragile.

On 03/12/19 20:01, Sean Christopherson wrote:
> In case it wasn't clear, I strongly dislike adding kvm_get_running_vcpu().
> IMO, it's an unnecessary hack.  The proper change to ensure a valid vCPU is
> seen by mark_page_dirty_in_ring() when there is a current vCPU is to
> plumb the vCPU down through the various call stacks.


