On Sat, Dec 30, 2023 at 10:19:39AM -0600, Michael Roth wrote:
>  static int rmpupdate(u64 pfn, struct rmp_state *state)
>  {
>  	unsigned long paddr = pfn << PAGE_SHIFT;
> -	int ret;
> +	int ret, level, npages;
>  
>  	if (!cpu_feature_enabled(X86_FEATURE_SEV_SNP))
>  		return -ENODEV;
>  
> +	level = RMP_TO_PG_LEVEL(state->pagesize);
> +	npages = page_level_size(level) / PAGE_SIZE;
> +
> +	/*
> +	 * If the kernel uses a 2MB directmap mapping to write to an address,
> +	 * and that 2MB range happens to contain a 4KB page that is set to
> +	 * private in the RMP table, an RMP #PF will trigger and cause a
> +	 * host crash.
> +	 *
> +	 * Prevent this by removing pages from the directmap prior to setting
> +	 * them as private in the RMP table.
> +	 */
> +	if (state->assigned && invalidate_direct_map(pfn, npages))
> +		return -EFAULT;

Well, the comment says one thing but the code does not do quite what the
text says:

* where is the check that pfn is part of the kernel direct map?

* where is the check that this address is part of a 2M PMD in the direct
  map, so that the situation described in the comment can even arise,
  i.e., that a 4K private PTE can exist there?

What this does is simply invalidate @npages unconditionally.

Then, __set_pages_np() already takes a number of pages and a start
address. Why is this thing iterating instead of sending *all* npages in
one go?

Yes, you'd need two new helpers, something like:

  set_pages_present(struct page *page, int numpages)
  set_pages_non_present(struct page *page, int numpages)

which call the lowlevel counterparts.

For some reason I remember asking this a while ago...

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette
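[ A minimal sketch of the two suggested helpers, assuming they would sit
  next to the existing set_pages_*() wrappers in
  arch/x86/mm/pat/set_memory.c, where the lowlevel static helpers
  __set_pages_p()/__set_pages_np() currently take
  (struct page *page, int numpages). Not a tested patch, just the shape
  being asked for: ]

```c
/*
 * Sketch only: thin wrappers modeled on the existing set_pages_*()
 * helpers in arch/x86/mm/pat/set_memory.c, exposing the lowlevel
 * __set_pages_p()/__set_pages_np() so callers can flip the Present
 * bit on a whole range in one call instead of per-4K iteration.
 */
int set_pages_present(struct page *page, int numpages)
{
	return __set_pages_p(page, numpages);
}

int set_pages_non_present(struct page *page, int numpages)
{
	return __set_pages_np(page, numpages);
}
```

[ With something like that, invalidate_direct_map() could collapse its
  loop into a single set_pages_non_present(pfn_to_page(pfn), npages)
  call, with set_pages_present() as the restore path. ]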