Re: [1/1] KVM: restrict kvm_gfn_to_hva_cache_init() to only accept address ranges within one page

On 4/22/2024 7:49 PM, Kele Huang wrote:
> Function kvm_gfn_to_hva_cache_init() is exported and used to
> initialize a gfn-to-hva cache in various places, for example in
> kvm_pv_enable_async_pf().  However, it directly tail-calls
> __kvm_gfn_to_hva_cache_init(), which sets ghc->memslot to NULL and
> returns 0 when the cache spans more than one page.  This is unsafe,
> as 0 normally indicates success while the call has not actually
> produced a valid ghc->memslot.  Callers of
> kvm_gfn_to_hva_cache_init(), such as kvm_lapic_set_vapic_addr(), do
> no further checking of ghc->memslot when kvm_gfn_to_hva_cache_init()
> returns 0.  Moreover, other developers may try to initialize a
> cross-page cache through this function and receive a success return
> value even though no usable fast-path cache was set up.
> 
> This patch fixes this issue by explicitly restricting function
> kvm_gfn_to_hva_cache_init() to only accept address ranges within
> one page and adding comments to the function accordingly.
> 

For a cross-page cache, returning zero is not really a mistake, since
__kvm_gfn_to_hva_cache_init() still verifies the entire region against
the memslot and does initialize ghc.
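
For reference, the tail of __kvm_gfn_to_hva_cache_init() roughly looks
like the sketch below (condensed and paraphrased from
virt/kvm/kvm_main.c, not a verbatim quote; the exact code differs
between kernel versions):

#include <linux/kvm_host.h>

static int __kvm_gfn_to_hva_cache_init(struct kvm_memslots *slots,
				       struct gfn_to_hva_cache *ghc,
				       gpa_t gpa, unsigned long len)
{
	gfn_t start_gfn = gpa >> PAGE_SHIFT;
	gfn_t end_gfn = (gpa + len - 1) >> PAGE_SHIFT;
	gfn_t nr_pages_needed = end_gfn - start_gfn + 1;
	gfn_t nr_pages_avail;

	ghc->generation = slots->generation;

	/* Every page of the range must resolve to a valid hva... */
	for ( ; start_gfn <= end_gfn; start_gfn += nr_pages_avail) {
		ghc->memslot = __gfn_to_memslot(slots, start_gfn);
		ghc->hva = gfn_to_hva_many(ghc->memslot, start_gfn,
					   &nr_pages_avail);
		if (kvm_is_error_hva(ghc->hva))
			return -EFAULT;
	}

	/* ...but only a single-page range gets the fast-path cache. */
	if (nr_pages_needed == 1)
		ghc->hva += offset_in_page(gpa);
	else
		ghc->memslot = NULL;	/* cross-page: force the slow path */

	ghc->gpa = gpa;
	ghc->len = len;
	return 0;			/* success even when memslot is NULL */
}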

The nullified memslot is there to indicate that reads and writes should
take the slow path, and it is the caller's responsibility to check
ghc->memslot if needed; see e.g. kvm_read_guest_offset_cached() and
kvm_write_guest_offset_cached().  Please refer to commit fcfbc617547f
("KVM: Check for a bad hva before dropping into the ghc slow path").



