Re: Splitting a THP beyond EOF

On Tue, Oct 27, 2020 at 04:31:26PM +1100, Dave Chinner wrote:
> On Thu, Oct 22, 2020 at 12:04:22AM +0100, Matthew Wilcox wrote:
> > On Thu, Oct 22, 2020 at 09:14:35AM +1100, Dave Chinner wrote:
> > > On Tue, Oct 20, 2020 at 11:53:31PM +0100, Matthew Wilcox wrote:
> > > > True, we don't _have to_ split THP on holepunch/truncation/... but it's
> > > > a better implementation to free pages which cover blocks that no longer
> > > > have data associated with them.
> > > 
> > > "Better" is a very subjective measure. What numbers do you have
> > > to back that up?
> > 
> > None.  When we choose to use a THP, we're choosing to treat a chunk
> > of a file as a single unit for the purposes of tracking dirtiness,
> > age, membership of the workingset, etc.  We're trading off reduced
> > precision for reduced overhead; just like the CPU tracks dirtiness on
> > a cacheline basis instead of at byte level.
> > 
> > So at some level, we're making the assumption that this 128kB THP is
> > all one thing and it should be tracked together.  But the user has just
> > punched a hole in it.  I can think of no stronger signal to say "The
> > piece before this hole, the piece I just got rid of and the piece after
> > this are three separate pieces of the file".
> 
> There's a difference between the physical layout of the file and
> representing data efficiently in the page cache. Just because we can
> use a THP to represent a single extent doesn't mean we should always
> use that relationship, nor should we require that small
> manipulations of on-disk extent state require that page cache pages
> be split or gathered.
> 
> i.e. the whole point of the page cache is to decouple the physical
> layout of the file from the user access mechanisms for performance
> reasons, not tie them tightly together. I think that's the wrong
> approach to be taking here - truncate/holepunch do not imply that
> THPs need to be split unconditionally. Indeed, readahead doesn't
> care that a THP might be split across multiple extents and require
> multiple bios to bring the data into cache, so why should
> truncate/holepunch type operations require the THP to be split to
> reflect underlying disk layouts?

At the time we do readahead, we've inferred from the user's access
patterns that they're reading this file sequentially, or close enough
to sequentially that it makes sense to bring in more of the file.
On-media layout of the file is irrelevant, as you say.

Now the user has given us another hint about how they see the file.
A call to FALLOC_FL_PUNCH_HOLE is certainly an instruction to the
filesystem to change the layout, but it's also giving the page cache
information about how the file is being treated.  It tells us that
the portion of the file before the hole is different from the portion
of the file after the hole, and treating those two portions of the
file as being similar for the purposes of working set tracking is
going to lead to wrong decisions.

Let's take an example where an app uses 1kB fixed size records.  First it
does a linear scan (so readahead kicks in and we get all the way up to
allocating 256kB pages).  Then it decides some records are obsolete, so it
calls PUNCH_HOLE on the range 20kB to 27kB in the page, then PUNCH_HOLE
40kB-45kB and finally PUNCH_HOLE 150kB-160kB.  In my current scheme,
this splits the page into 4kB pages.  If the app then only operates on
the records after 160kB and before 20kB, the pages used to cache records
in the 24kB-40kB and 44kB-150kB ranges will naturally fall out of cache
and the memory will be used for other purposes.  With your scheme,
the 256kB page would be retained in cache as a single piece.

> > If I could split them into pieces that weren't single pages, I would.
> > Zi Yan has a patch to do just that, and I'm very much looking forward
> > to that being merged.  But saying "Oh, this is quite small, I'll keep
> > the rest of the THP together" is conceptually wrong.
> 
> Yet that's exactly what we do with block size < PAGE_SIZE
> configurations, so I fail to see why it's conceptually wrong for
> THPs to behave the same way as normal pages do....

We don't have the ability to mmap files at smaller than PAGE_SIZE
granularity, so we can't do that.

> > I'm not saying that my patchset is the last word and there will be no
> > tweaking.  I'm saying I think it's good enough, an improvement on the
> > status quo, and it's better to merge it for 5.11 than to keep it out of
> > tree for another three months while we tinker with improving it.
> > 
> > Do you disagree?
> 
> In part. Concepts and algorithms need to be sound and agreed upon
> before we merge patches, and right now I disagree with the some of
> the basic assumptions about how THP and filesystem layout operations
> are being coupled. That part needs to be sorted before stuff gets
> merged...

They're not being coupled.  I'm using the information the user is
giving the kernel to make better decisions about what to cache.


