Re: [PATCH 00/14] Small step toward KSM for file back page.

On Thu, Oct 08, 2020 at 04:43:41PM +0100, Matthew Wilcox wrote:
> On Thu, Oct 08, 2020 at 11:30:28AM -0400, Jerome Glisse wrote:
> > On Wed, Oct 07, 2020 at 11:09:16PM +0100, Matthew Wilcox wrote:
> > > So ... why don't you put a PageKsm page in the page cache?  That way you
> > > can share code with the current KSM implementation.  You'd need
> > > something like this:
> > 
> > I do just that but there is no need to change anything in page cache.
> 
> That's clearly untrue.  If you just put a PageKsm page in the page
> cache today, here's what will happen on a truncate:
> 
> void truncate_inode_pages_range(struct address_space *mapping,
>                                 loff_t lstart, loff_t lend)
> {
> ...
>                 struct page *page = find_lock_page(mapping, start - 1);
> 
> find_lock_page() does this:
>         return pagecache_get_page(mapping, offset, FGP_LOCK, 0);
> 
> pagecache_get_page():
> 
> repeat:
>         page = find_get_entry(mapping, index);
> ...
>         if (fgp_flags & FGP_LOCK) {
> ...
>                 if (unlikely(compound_head(page)->mapping != mapping)) {
>                         unlock_page(page);
>                         put_page(page);
>                         goto repeat;
> 
> so it's just going to spin.  There are plenty of other codepaths that
> would need to be checked.  If you haven't found them, that shows you
> don't understand the problem deeply enough yet.

I also changed truncate, splice and a few other special cases that do
not go through GUP/page fault/mkwrite (memory debug too, but that's
a different beast).
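
For truncate the idea is roughly the following (just a sketch from
memory, not the exact code in my tree; fs_ksm_break_shared() is a
placeholder name for the helper that re-instantiates a private copy
of a shared page for a given mapping/offset):

/* Sketch only. The lookup must not recheck page->mapping the way
 * pagecache_get_page(FGP_LOCK) does, because a shared page's
 * mapping does not point back at this file. */
static void truncate_break_shared(struct address_space *mapping,
                                  pgoff_t index)
{
        struct page *page = find_get_entry(mapping, index);

        if (!page || xa_is_value(page))
                return;
        lock_page(page);
        if (PageKsm(page))
                fs_ksm_break_shared(mapping, index, page);
        unlock_page(page);
        put_page(page);
}

so the regular truncate code afterwards sees a private page whose
mapping points back at the file, and does not spin.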


> I believe we should solve this problem, but I don't think you're going
> about it the right way.

I have done much more than what I posted, but there is a bug that I
need to hammer down before posting everything, and I wanted to get
the discussion started. I guess I will finish tracking that one
down and post the whole thing.


> > So flow is:
> > 
> >   Same as before:
> >     1 - write fault (address, vma)
> >     2 - regular write fault handler -> find page in page cache
> > 
> >   New to common page fault code:
> >     3 - ksm check in write fault common code (same as ksm today
> >         for anonymous page fault code path).
> >     4 - break ksm (address, vma) -> (file offset, mapping)
> >         4.a - use mapping and file offset to look up the proper
> >               fs specific information that was saved when the
> >               page was made ksm.
> >         4.b - allocate a new page and initialize it with that
> >               information (and page content), update the page
> >               cache and mappings, ie all the ptes that were
> >               pointing to the ksm page for that mapping at that
> >               offset now use the new page (like KSM does for
> >               anonymous pages today).
> 
> But by putting that logic in the page fault path, you've missed
> the truncate path.  And maybe other places.  Putting the logic
> down in pagecache_get_page() means you _don't_ need to find
> all the places that call pagecache_get_page().

There are cases where the page cache is not even in the loop, ie you
already have the page and do not need to look it up (page fault,
some fs common code, anything that goes through GUP, memory
reclaim, ...). Making all those places go through the page cache
every time would slow them down, and many of them are hot code
paths that I do not believe we want to slow down even when the
feature is not in use.
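
To make step 4 of the flow quoted above a bit more concrete, the
write fault side is roughly (again only a sketch; the fs_ksm_*
helpers and struct ksm_file_info are placeholder names, not what is
in my tree):

static vm_fault_t ksm_break_file_page(struct vm_fault *vmf,
                                      struct page *ksm_page)
{
        /* 4 - break ksm (address, vma) -> (file offset, mapping) */
        struct address_space *mapping = vmf->vma->vm_file->f_mapping;
        pgoff_t offset = linear_page_index(vmf->vma, vmf->address);
        struct ksm_file_info *info;
        struct page *new_page;

        /* 4.a - use mapping and offset to find the fs specific
         * information that was saved when the page was made ksm. */
        info = fs_ksm_lookup_info(mapping, offset);

        /* 4.b - allocate a new page and initialize it with that
         * information and the page content ... */
        new_page = __page_cache_alloc(GFP_HIGHUSER_MOVABLE);
        if (!new_page)
                return VM_FAULT_OOM;
        copy_highpage(new_page, ksm_page);
        fs_ksm_restore_private(new_page, info);

        /* ... then update the page cache and every pte that was
         * pointing to the ksm page for this mapping at this offset
         * so they use the new page (like KSM does for anonymous
         * pages today). */
        fs_ksm_replace_page(mapping, offset, ksm_page, new_page);

        vmf->page = new_page;
        return 0;
}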

Cheers,
Jérôme



