On 4/13/22 12:28, David Hildenbrand wrote:
> On 13.04.22 10:25, Vlastimil Babka wrote:
>> On 3/29/22 18:04, David Hildenbrand wrote:
>>> the pin will be fully reliable and stay consistent with the pages
>>> mapped into the page table, as the bit cannot get cleared (e.g., by
>>> fork(), KSM) while the page is pinned. For anonymous pages that
>>> are mapped R/W, PG_anon_exclusive can be assumed to always be set
>>> because such pages cannot possibly be shared.
>>>
>>> The page table lock protecting the page table entry is the primary
>>> synchronization mechanism for PG_anon_exclusive; GUP-fast that does
>>> not take the PT lock needs special care when trying to clear the
>>> flag.
>>>
>>> Page table entry types and PG_anon_exclusive:
>>> * Present: PG_anon_exclusive applies.
>>> * Swap: the information is lost. PG_anon_exclusive was cleared.
>>> * Migration: the entry holds this information instead.
>>>   PG_anon_exclusive was cleared.
>>> * Device private: PG_anon_exclusive applies.
>>> * Device exclusive: PG_anon_exclusive applies.
>>> * HW Poison: PG_anon_exclusive is stale and not changed.
>>>
>>> If the page may be pinned (FOLL_PIN), clearing PG_anon_exclusive is
>>> not allowed and the flag will stick around until the page is freed
>>> and folio->mapping is cleared.
>>
>> Or also if it's unpinned?
>
> I'm afraid I didn't get your question. Once the page is no longer
> pinned, we can succeed in clearing PG_anon_exclusive (just like pinning
> never happened). Does that answer your question?

Yeah, it looked like a scenario that was oddly missing from that
description, yet probably obvious. Now I feel it is indeed obvious, so
never mind. :)

>>> We won't be clearing PG_anon_exclusive on destructive unmapping (i.e.,
>>> zapping) of page table entries; page freeing code will handle that when
>>> also invalidating page->mapping to not indicate PageAnon() anymore.
>>> Letting information about exclusivity stick around will be an important
>>> property when adding sanity checks to unpinning code.
>>>
>>> Note that we properly clear the flag in free_pages_prepare() via
>>> PAGE_FLAGS_CHECK_AT_PREP for each individual subpage of a compound page,
>>> so there is no need to manually clear the flag.
>>>
>>> Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
>>
>> Acked-by: Vlastimil Babka <vbabka@xxxxxxx>
>
> Thanks!
>
>>
>>> --- a/mm/memory.c
>>> +++ b/mm/memory.c
>>> @@ -3663,6 +3663,17 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>>>  		goto out_nomap;
>>>  	}
>>>
>>> +	/*
>>> +	 * PG_anon_exclusive reuses PG_mappedtodisk for anon pages. A swap pte
>>> +	 * must never point at an anonymous page in the swapcache that is
>>> +	 * PG_anon_exclusive. Sanity check that this holds and especially, that
>>> +	 * no filesystem set PG_mappedtodisk on a page in the swapcache. Sanity
>>> +	 * check after taking the PT lock and making sure that nobody
>>> +	 * concurrently faulted in this page and set PG_anon_exclusive.
>>> +	 */
>>> +	BUG_ON(!PageAnon(page) && PageMappedToDisk(page));
>>> +	BUG_ON(PageAnon(page) && PageAnonExclusive(page));
>>> +
>>
>> Hmm, dunno why not VM_BUG_ON?
>
> Getting PageAnonExclusive accidentally set by a file system would result
> in an extremely unpleasant security issue. I most surely want to catch
> something like that in any case, especially in the foreseeable future.

OK then.
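
(For context on the BUG_ON() vs. VM_BUG_ON() question above: VM_BUG_ON() is
only active on kernels built with CONFIG_DEBUG_VM, while BUG_ON() fires on
every build. A minimal sketch of the distinction, roughly following what
include/linux/mmdebug.h does; the exact definitions can vary by kernel
version:

	/*
	 * With CONFIG_DEBUG_VM, VM_BUG_ON() behaves like BUG_ON() and
	 * triggers a kernel BUG() when the condition is true. Without
	 * it, the macro compiles down to a sizeof() no-op that only
	 * type-checks its argument, so the condition is never even
	 * evaluated at runtime.
	 */
	#ifdef CONFIG_DEBUG_VM
	#define VM_BUG_ON(cond)	BUG_ON(cond)
	#else
	#define VM_BUG_ON(cond)	BUILD_BUG_ON_INVALID(cond)
	#endif

With VM_BUG_ON(), a production kernel built without CONFIG_DEBUG_VM would
silently skip the swapcache sanity checks in the hunk above; using BUG_ON()
keeps them active everywhere, which is why a check with security
implications is kept unconditional here.)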