On Wed, Aug 19, 2020 at 07:48:48PM +0100, Matthew Wilcox (Oracle) wrote:
> There are only three callers remaining of find_get_entry().
> find_get_swap_page() is happy to get the head page instead of the subpage.
> Add find_subpage() calls to find_lock_entry() and pagecache_get_page()
> to avoid auditing all their callers.

I believe this would cause a subtle bug in memcg charge moving for
pte-mapped huge pages. We currently skip over tail pages in the range
(they don't have page->mem_cgroup set) and account for the huge page
once from the headpage. After this change, we would see the headpage
at every index and account for it 512 times (or whatever the number is
on non-x86).

But that aside, I don't quite understand the intent. Before, all these
functions simply return the base page at @index, whether it's a regular
page or a tail page. Afterwards, find_lock_entry(), find_get_page() et
al. still do, but find_get_entry() returns the headpage at
@index & HPAGE_CACHE_INDEX_MASK.

Shouldn't we be consistent about how we handle huge pages when somebody
queries the tree for a given base page index?

[ Wouldn't that mean that e.g. find_get_swap_page() would return tail
  pages for regular files and head pages for shmem files? ]
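
To make the charge-moving concern concrete, here is a rough sketch of
the page cache lookup the pte walk ends up doing for file pages. This
is heavily simplified from get_mctgt_type() / mc_handle_file_pte() and
is not the literal kernel code; account_and_move() below is just a
stand-in name for the real accounting, not an existing helper:

	for (addr = start; addr != end; addr += PAGE_SIZE) {
		pgoff_t index = linear_page_index(vma, addr);
		struct page *page = find_get_entry(mapping, index);

		if (!page || xa_is_value(page))
			continue;

		/*
		 * Today: for a huge page, all but one of the 512 indexes
		 * return a tail page whose page->mem_cgroup is NULL, so
		 * only the headpage index passes this test and the charge
		 * moves exactly once.
		 *
		 * With find_get_entry() returning the headpage for every
		 * subpage index, every iteration passes and the huge page
		 * is accounted HPAGE_PMD_NR times.
		 */
		if (page->mem_cgroup != mc.from) {
			put_page(page);
			continue;
		}

		account_and_move(page);	/* stand-in for the real work */
		put_page(page);
	}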