On Mon, 6 Jul 2020, Matthew Wilcox wrote:
> 
> @@ -841,6 +842,7 @@ static int __add_to_page_cache_locked(struct page *page,
>  		nr = thp_nr_pages(page);
>  	}
> 
> +	VM_BUG_ON_PAGE(xa_load(&mapping->i_pages, offset + nr) == page, page);
>  	page_ref_add(page, nr);
>  	page->mapping = mapping;
>  	page->index = offset;
> @@ -880,6 +882,7 @@ static int __add_to_page_cache_locked(struct page *page,
>  		goto error;
>  	}
> 
> +	VM_BUG_ON_PAGE(xa_load(&mapping->i_pages, offset + nr) == page, page);
>  	trace_mm_filemap_add_to_page_cache(page);
>  	return 0;
>  error:
> 
> The second one triggers with generic/051 (running against xfs with the
> rest of my patches).  So I deduce that we have a shadow entry which
> takes up multiple indices, then when we store the page, we now have
> a multi-index entry which refers to a single page.  And this explains
> basically all the accounting problems.

I think you are jumping too far ahead by bringing in xfs and your later
patches.  Don't let me stop you from thinking ahead, but the problem at
hand is with tmpfs.  tmpfs doesn't use shadow entries, or not in the
sense we use "shadow" for the workingset business: it does save swap
entries as xarray values (sketched below), but I don't suppose the five
xfstests I was running get into swap (without applying additional
pressure); and since a huge page gets split before shmem swapout, at
present anyway, I don't see a philosophical problem with their
multi-index entries.

Multi-index shadows, sorry, not a subject I can think about at present.

> 
> Now I'm musing how best to fix this.
> 
> 1. When removing a compound page from the cache, we could store only
> a single entry.  That seems bad because we could hit somewhere else in
> the compound page and we'd have the wrong information about workingset
> history (or, worse, believe that a shmem page isn't in swap!)
> 
> 2. When removing a compound page from the cache, we could split the
> entry and store the same entry N times.  Again, this seems bad for
> shmem, because then all the swap entries would be the same, and we'd
> fetch the wrong data from swap.
> 
> 3 & 4. When adding a page to the cache, we delete any shadow entry
> which was previously there, or replicate the shadow entry.  Same
> drawbacks as the above two, I feel.
> 
> 5. Use the size of the shadow entry to allocate a page of the right
> size.  We don't currently have an API to find the right size, so that
> would need to be written.  And what do we do if we can't allocate a
> page of sufficient size?
> 
> So that's five ideas with their drawbacks as I see them.  Maybe you
> have a better idea?
> 
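For reference, the swap entries mentioned above are ordinary xarray
value entries.  The encode/decode helpers are the real ones from
include/linux/swapops.h; the store at swapout time is actually done by
shmem_delete_from_page_cache() via shmem_replace_entry(), so the last
helper below is a made-up name and a simplification of the real locked
replace, shown only to illustrate the encoding:

/* include/linux/swapops.h: a swap entry encoded as an xarray value */
static inline void *swp_to_radix_entry(swp_entry_t entry)
{
	return xa_mk_value(entry.val);
}

static inline swp_entry_t radix_to_swp_entry(void *arg)
{
	swp_entry_t entry;

	entry.val = xa_to_value(arg);
	return entry;
}

/*
 * Sketch of the swapout side (cf. shmem_delete_from_page_cache()):
 * the page's slot in i_pages is replaced by the value entry for its
 * swap slot, so a later lookup sees xa_is_value() and goes to swap.
 * The real code holds the xa_lock and checks the old entry matched;
 * this helper name is hypothetical, for illustration only.
 */
static void shmem_store_swap_entry(struct address_space *mapping,
				   pgoff_t index, swp_entry_t swap)
{
	xa_store(&mapping->i_pages, index, swp_to_radix_entry(swap),
		 GFP_ATOMIC);
}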
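If it helps to picture the shape of idea 5, the allocation side might
look roughly like this.  Entirely a sketch: xa_get_order() here stands
for the not-yet-written API that reports how many indices the entry at
@offset occupies, and the helper name is made up:

/*
 * Idea 5, roughly: ask the xarray how big the existing (shadow) entry
 * is before allocating, and allocate a page of that order.  What to do
 * when the big allocation fails is the open question.
 */
static struct page *alloc_page_for_entry(struct address_space *mapping,
					 pgoff_t offset, gfp_t gfp)
{
	unsigned int order = xa_get_order(&mapping->i_pages, offset);
	struct page *page;

	if (order)
		gfp |= __GFP_COMP;
	page = alloc_pages(gfp, order);
	if (!page && order) {
		/* Split the shadow entry and retry with order 0? */
		page = alloc_pages(gfp & ~__GFP_COMP, 0);
	}
	return page;
}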