On Fri, Jan 05, 2024 at 10:30:23AM -0800, mhkelley58@xxxxxxxxx wrote:
> From: Michael Kelley <mhklinux@xxxxxxxxxxx>
>
> In preparation for temporarily marking pages not present during a
> transition between encrypted and decrypted, use slow_virt_to_phys()
> in the hypervisor callback. As long as the PFN is correct,
> slow_virt_to_phys() works even if the leaf PTE is not present.
> The existing functions that depend on vmalloc_to_page() all
> require that the leaf PTE be marked present, so they don't work.
>
> Update the comments for slow_virt_to_phys() to note this broader usage
> and the requirement to work even if the PTE is not marked present.
>
> Signed-off-by: Michael Kelley <mhklinux@xxxxxxxxxxx>
> ---
>  arch/x86/hyperv/ivm.c        |  9 ++++++++-
>  arch/x86/mm/pat/set_memory.c | 13 +++++++++----
>  2 files changed, 17 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/hyperv/ivm.c b/arch/x86/hyperv/ivm.c
> index 02e55237d919..8ba18635e338 100644
> --- a/arch/x86/hyperv/ivm.c
> +++ b/arch/x86/hyperv/ivm.c
> @@ -524,7 +524,14 @@ static bool hv_vtom_set_host_visibility(unsigned long kbuffer, int pagecount, bo
>  		return false;
>
>  	for (i = 0, pfn = 0; i < pagecount; i++) {
> -		pfn_array[pfn] = virt_to_hvpfn((void *)kbuffer + i * HV_HYP_PAGE_SIZE);
> +		/*
> +		 * Use slow_virt_to_phys() because the PRESENT bit has been
> +		 * temporarily cleared in the PTEs. slow_virt_to_phys() works
> +		 * without the PRESENT bit while virt_to_hvpfn() or similar
> +		 * does not.
> +		 */
> +		pfn_array[pfn] = slow_virt_to_phys((void *)kbuffer +
> +				i * HV_HYP_PAGE_SIZE) >> HV_HYP_PAGE_SHIFT;

I think you can make it much more readable by introducing a few variables:

	virt = (void *)kbuffer + i * HV_HYP_PAGE_SIZE;
	phys = slow_virt_to_phys(virt);
	pfn_array[pfn] = phys >> HV_HYP_PAGE_SHIFT;

>  		pfn++;
>
>  		if (pfn == HV_MAX_MODIFY_GPA_REP_COUNT || i == pagecount - 1) {
>
> diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
> index bda9f129835e..8e19796e7ce5 100644
> --- a/arch/x86/mm/pat/set_memory.c
> +++ b/arch/x86/mm/pat/set_memory.c
> @@ -755,10 +755,15 @@ pmd_t *lookup_pmd_address(unsigned long address)
>   * areas on 32-bit NUMA systems. The percpu areas can
>   * end up in this kind of memory, for instance.
>   *
> - * This could be optimized, but it is only intended to be
> - * used at initialization time, and keeping it
> - * unoptimized should increase the testing coverage for
> - * the more obscure platforms.
> + * It is also used in callbacks for CoCo VM page transitions between private
> + * and shared because it works when the PRESENT bit is not set in the leaf
> + * PTE. In such cases, the state of the PTEs, including the PFN, is otherwise
> + * known to be valid, so the returned physical address is correct. The similar
> + * function vmalloc_to_pfn() can't be used because it requires the PRESENT bit.
> + *
> + * This could be optimized, but it is only used in paths that are not perf
> + * sensitive, and keeping it unoptimized should increase the testing coverage
> + * for the more obscure platforms.
>   */
>  phys_addr_t slow_virt_to_phys(void *__virt_addr)
>  {
> --
> 2.25.1
>

--
  Kiryl Shutsemau / Kirill A. Shutemov