On 3/29/22 18:04, David Hildenbrand wrote:
> The basic question we would like to have a reliable and efficient answer
> to is: is this anonymous page exclusive to a single process or might it
> be shared? We need that information for ordinary/single pages, hugetlb
> pages, and possibly each subpage of a THP.
>
> Introduce a way to mark an anonymous page as exclusive, with the
> ultimate goal of teaching our COW logic to not do "wrong COWs", whereby
> GUP pins lose consistency with the pages mapped into the page table,
> resulting in reported memory corruptions.
>
> Most pageflags already have semantics for anonymous pages, however,
> PG_mappedtodisk should never apply to pages in the swapcache, so let's
> reuse that flag.
>
> As PG_has_hwpoisoned also uses that flag on the second tail page of a
> compound page, convert it to PG_error instead, which is marked as
> PF_NO_TAIL, so never used for tail pages.
>
> Use custom page flag modification functions such that we can do
> additional sanity checks. The semantics we'll put into some kernel doc
> in the future are:
>
> "
> PG_anon_exclusive is *usually* only expressive in combination with a
> page table entry. Depending on the page table entry type it might
> store the following information:
>
>   Is what's mapped via this page table entry exclusive to the
>   single process and can be mapped writable without further
>   checks? If not, it might be shared and we might have to COW.
>
> For now, we only expect PTE-mapped THPs to make use of
> PG_anon_exclusive in subpages. For other anonymous compound
> folios (i.e., hugetlb), only the head page is logically mapped and
> holds this information.
>
> For example, an exclusive, PMD-mapped THP only has PG_anon_exclusive
> set on the head page. When replacing the PMD by a page table full
> of PTEs, PG_anon_exclusive, if set on the head page, will be set on
> all tail pages accordingly. Note that converting from a PTE-mapping
> to a PMD mapping using the same compound page is currently not
> possible and consequently doesn't require care.
>
> If GUP wants to take a reliable pin (FOLL_PIN) on an anonymous page,
> it should only pin if the relevant PG_anon_bit is set. In that case,

                                     ^ PG_anon_exclusive bit ?

> the pin will be fully reliable and stay consistent with the pages
> mapped into the page table, as the bit cannot get cleared (e.g., by
> fork(), KSM) while the page is pinned. For anonymous pages that
> are mapped R/W, PG_anon_exclusive can be assumed to always be set
> because such pages cannot possibly be shared.
>
> The page table lock protecting the page table entry is the primary
> synchronization mechanism for PG_anon_exclusive; GUP-fast that does
> not take the PT lock needs special care when trying to clear the
> flag.
>
> Page table entry types and PG_anon_exclusive:
> * Present: PG_anon_exclusive applies.
> * Swap: the information is lost. PG_anon_exclusive was cleared.
> * Migration: the entry holds this information instead.
>              PG_anon_exclusive was cleared.
> * Device private: PG_anon_exclusive applies.
> * Device exclusive: PG_anon_exclusive applies.
> * HW Poison: PG_anon_exclusive is stale and not changed.
>
> If the page may be pinned (FOLL_PIN), clearing PG_anon_exclusive is
> not allowed and the flag will stick around until the page is freed
> and folio->mapping is cleared.

Or also if it's unpinned?
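Coming back to the "custom page flag modification functions" mentioned
above, here's a minimal sketch of what such helpers with extra sanity
checks could look like. Illustrative only -- the names follow the
changelog, but the exact checks and the PF_* policy wrappers in the
real patch may differ:

/* In enum pageflags, per the changelog, the bit aliases another flag:
 *	PG_anon_exclusive = PG_mappedtodisk,
 */

static __always_inline void SetPageAnonExclusive(struct page *page)
{
	/* The flag is only meaningful for anonymous, non-KSM pages. */
	VM_BUG_ON_PGFLAGS(!PageAnon(page) || PageKsm(page), page);
	/* For hugetlb, only the head page logically holds the flag. */
	VM_BUG_ON_PGFLAGS(PageHuge(page) && !PageHead(page), page);
	set_bit(PG_anon_exclusive, &page->flags);
}

static __always_inline void ClearPageAnonExclusive(struct page *page)
{
	VM_BUG_ON_PGFLAGS(!PageAnon(page) || PageKsm(page), page);
	VM_BUG_ON_PGFLAGS(PageHuge(page) && !PageHead(page), page);
	clear_bit(PG_anon_exclusive, &page->flags);
}

static __always_inline int PageAnonExclusive(struct page *page)
{
	VM_BUG_ON_PGFLAGS(!PageAnon(page), page);
	VM_BUG_ON_PGFLAGS(PageHuge(page) && !PageHead(page), page);
	return test_bit(PG_anon_exclusive, &page->flags);
}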
> " > > We won't be clearing PG_anon_exclusive on destructive unmapping (i.e., > zapping) of page table entries, page freeing code will handle that when > also invalidate page->mapping to not indicate PageAnon() anymore. > Letting information about exclusivity stick around will be an important > property when adding sanity checks to unpinning code. > > Note that we properly clear the flag in free_pages_prepare() via > PAGE_FLAGS_CHECK_AT_PREP for each individual subpage of a compound page, > so there is no need to manually clear the flag. > > Signed-off-by: David Hildenbrand <david@xxxxxxxxxx> Acked-by: Vlastimil Babka <vbabka@xxxxxxx> > --- a/mm/memory.c > +++ b/mm/memory.c > @@ -3663,6 +3663,17 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) > goto out_nomap; > } > > + /* > + * PG_anon_exclusive reuses PG_mappedtodisk for anon pages. A swap pte > + * must never point at an anonymous page in the swapcache that is > + * PG_anon_exclusive. Sanity check that this holds and especially, that > + * no filesystem set PG_mappedtodisk on a page in the swapcache. Sanity > + * check after taking the PT lock and making sure that nobody > + * concurrently faulted in this page and set PG_anon_exclusive. > + */ > + BUG_ON(!PageAnon(page) && PageMappedToDisk(page)); > + BUG_ON(PageAnon(page) && PageAnonExclusive(page)); > + Hmm, dunno why not VM_BUG_ON? > /* > * Remove the swap entry and conditionally try to free up the swapcache. > * We're already holding a reference on the page but haven't mapped it > diff --git a/mm/memremap.c b/mm/memremap.c > index af0223605e69..4264f78299a8 100644