Re: mm: delay rmap removal until after TLB flush

On Mon, Oct 31, 2022 at 11:43:30AM -0700, Linus Torvalds wrote:
[...]
> If people really want to see the patches in email again, I can do
> that, but most of you already have, and the changes are either trivial
> fixes or the s390 updates.
> 
> For the s390 people that I've now added to the participant list maybe
> the git tree is fine - and the fundamental explanation of the problem
> is in that top-most commit (with the three preceding commits being
> prep-work). Or that link to the thread about this all.

Rather, I have a question about the generic part (I had to master the code quoting).

> static void clean_and_free_pages_and_swap_cache(struct encoded_page **pages, unsigned int nr)
> {
> 	for (unsigned int i = 0; i < nr; i++) {
> 		struct encoded_page *encoded = pages[i];
> 		unsigned int flags = encoded_page_flags(encoded);
> 		if (flags) {
> 			/* Clean the flagged pointer in-place */
> 			struct page *page = encoded_page_ptr(encoded);
> 			pages[i] = encode_page(page, 0);
> 
> 			/* The flag bit being set means that we should zap the rmap */

Why is the TLB_ZAP_RMAP bit not checked explicitly here, like in the s390 version?
(I assume that if/when ENCODE_PAGE_BITS ever carries more than just TLB_ZAP_RMAP,
calling page_zap_pte_rmap() without such a check would be a bug; see the sketch
below the quoted function for what I mean.)

> 			page_zap_pte_rmap(page);
> 			VM_WARN_ON_ONCE_PAGE(page_mapcount(page) < 0, page);
> 		}
> 	}
> 
> 	/*
> 	 * Now all entries have been un-encoded, and changed to plain
> 	 * page pointers, so we can cast the 'encoded_page' array to
> 	 * a plain page array and free them
> 	 */
> 	free_pages_and_swap_cache((struct page **)pages, nr);
> }
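To illustrate the question, here is a rough sketch of what I would have expected
(purely hypothetical, assuming ENCODE_PAGE_BITS could one day contain flags other
than TLB_ZAP_RMAP):

static void clean_and_free_pages_and_swap_cache(struct encoded_page **pages, unsigned int nr)
{
	for (unsigned int i = 0; i < nr; i++) {
		struct encoded_page *encoded = pages[i];
		unsigned int flags = encoded_page_flags(encoded);

		if (flags) {
			/* Clean the flagged pointer in-place, as before */
			struct page *page = encoded_page_ptr(encoded);
			pages[i] = encode_page(page, 0);

			/* Only zap the rmap for entries that were actually flagged for it */
			if (flags & TLB_ZAP_RMAP) {
				page_zap_pte_rmap(page);
				VM_WARN_ON_ONCE_PAGE(page_mapcount(page) < 0, page);
			}
		}
	}

	free_pages_and_swap_cache((struct page **)pages, nr);
}

With ENCODE_PAGE_BITS being TLB_ZAP_RMAP only, the two versions are of course
equivalent; the question is just whether the explicit check should be there for
future-proofing.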

Thanks!
