On Wed, Nov 30, 2022 at 09:36:15AM -0800, Hugh Dickins wrote:
> On Wed, 30 Nov 2022, Shakeel Butt wrote:
> >
> > 2. For 6.2 (or 6.3), remove the non-present pte migration with some
> > additional text in the warning and do the rmap cleanup.
>
> I just had an idea for softening the impact of that change: a moment's
> more thought may prove it's a terrible idea, but right now I like it.
>
> What if we keep the non-present pte migration throughout the deprecation
> period, but with a change to where the folio_trylock() is done, and
> a refusal to move the charge on the page of a non-present pte, if that
> page/folio is currently mapped anywhere else - the folio lock preventing
> it from then becoming mapped while in mem_cgroup_move_account().

I would like that better too. Charge moving has always been lossy
(because of trylocking the page, and having to isolate it), but
categorically leaving private swap pages behind seems like a bit much
to sneak in quietly.

> There's an argument that that's a better implementation anyway: that
> we should not interfere with others' pages; but perhaps it would turn
> out to be unimplementable, or would make for less predictable behaviour.

Hm, I think the below should work for swap pages. Do you see anything
obviously wrong with it, or scenarios I haven't considered?

@@ -5637,6 +5645,46 @@ static struct page *mc_handle_swap_pte(struct vm_area_struct *vma,
 	 * we call find_get_page() with swapper_space directly.
 	 */
 	page = find_get_page(swap_address_space(ent), swp_offset(ent));
+
+	/*
+	 * Don't move shared charges. This isn't just for saner move
+	 * semantics, it also ensures that page_mapped() is stable for
+	 * the accounting in mem_cgroup_mapcount().
+	 *
+	 * We have to serialize against the following paths: fork
+	 * (which may copy a page map or a swap pte), fault (which may
+	 * change a swap pte into a page map), unmap (which may cause
+	 * a page map or a swap pte to disappear), and reclaim (which
+	 * may change a page map into a swap pte).
+	 *
+	 * - Without swapcache, we only want to move the charge if
+	 *   there are no other swap ptes. With the pte lock, the
+	 *   swapcount is stable against all of the above scenarios
+	 *   when it's 1 (our pte), which is the case we care about.
+	 *
+	 * - When there is a page in swapcache, we only want to move
+	 *   charges when neither the page nor the swap entry are
+	 *   mapped elsewhere. The pte lock prevents our pte from
+	 *   being forked or unmapped. The page lock will stop faults
+	 *   against, and reclaim of, the swapcache page. So if the
+	 *   page isn't mapped, and the swap count is 1 (our pte), the
+	 *   test results are stable and the charge is exclusive.
+	 */
+	if (!page && __swap_count(ent) != 1)
+		return NULL;
+
+	if (page) {
+		if (!trylock_page(page)) {
+			put_page(page);
+			return NULL;
+		}
+		if (page_mapped(page) || __swap_count(ent) != 1) {
+			unlock_page(page);
+			put_page(page);
+			return NULL;
+		}
+	}
+
 	entry->val = ent.val;
 
 	return page;
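
One caller-side note on the above, as a sketch only (not compile-tested,
and not a claim about how the real caller would end up being structured):
a swapcache page that survives these checks is returned with the page
lock still held, so the consumer of the MC_TARGET_PAGE result has to
pair an unlock_page() with the existing put_page(). Using
PageSwapCache() to tell whether this path took the lock is an assumption
here; vma and ptent are the ones from the surrounding pte walk.

	swp_entry_t ent = { .val = 0 };
	struct page *page;

	page = mc_handle_swap_pte(vma, ptent, &ent);
	if (page) {
		/*
		 * Assumed discriminator: only the find_get_page()
		 * branch above returns the page locked, and such a
		 * page is in swapcache (device private pages are not).
		 */
		bool locked = PageSwapCache(page);

		/* ... move the charge via mem_cgroup_move_account() ... */

		if (locked)
			unlock_page(page);
		put_page(page);
	}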