Re: [PATCH 1/2] mm: Add memalloc_nowait_{save,restore}

On Wed, Aug 14, 2024 at 1:42 PM Dave Chinner <david@xxxxxxxxxxxxx> wrote:
>
> On Wed, Aug 14, 2024 at 10:19:36AM +0800, Yafang Shao wrote:
> > On Wed, Aug 14, 2024 at 8:28 AM Dave Chinner <david@xxxxxxxxxxxxx> wrote:
> > >
> > > On Mon, Aug 12, 2024 at 05:05:24PM +0800, Yafang Shao wrote:
> > > > The PF_MEMALLOC_NORECLAIM flag was introduced in commit eab0af905bfc
> > > > ("mm: introduce PF_MEMALLOC_NORECLAIM, PF_MEMALLOC_NOWARN"). To complement
> > > > this, let's add two helper functions, memalloc_nowait_{save,restore}, which
> > > > will be useful in scenarios where we want to avoid waiting for memory
> > > > reclamation.
> > >
> > > Readahead already uses this context:
> > >
> > > static inline gfp_t readahead_gfp_mask(struct address_space *x)
> > > {
> > >         return mapping_gfp_mask(x) | __GFP_NORETRY | __GFP_NOWARN;
> > > }
> > >
> > > and __GFP_NORETRY means minimal direct reclaim should be performed.
> > > Most filesystems already have GFP_NOFS context from
> > > mapping_gfp_mask(), so how much difference does completely avoiding
> > > direct reclaim actually make under memory pressure?
> >
> > Besides __GFP_NOFS, ~__GFP_DIRECT_RECLAIM also implies
> > __GFP_NOIO. If we don't set __GFP_NOIO, readahead can wait for IO,
> > right?
>
> There's a *lot* more difference between __GFP_NORETRY and
> __GFP_NOWAIT than just __GFP_NOIO. I don't need you to try to
> describe to me what the differences are; what I'm asking you is this:
>
> > > i.e. doing some direct reclaim without blocking when under memory
> > > pressure might actually give better performance than skipping direct
> > > reclaim and aborting readahead altogether....
> > >
> > > This really, really needs some numbers (both throughput and IO
> > > latency histograms) to go with it because we have no evidence either
> > > way to determine what is the best approach here.
>
> Put simply: does the existing readahead mechanism give better results
> than the proposed one, and if so, why wouldn't we just reenable
> readahead unconditionally instead of making it behave differently
> for this specific case?

Are you suggesting we compare the following change with the current proposal?

diff --git a/include/linux/fs.h b/include/linux/fs.h
index fd34b5755c0b..ced74b1b350d 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -3455,7 +3455,6 @@ static inline int kiocb_set_rw_flags(struct kiocb *ki, rwf_t flags,
        if (flags & RWF_NOWAIT) {
                if (!(ki->ki_filp->f_mode & FMODE_NOWAIT))
                        return -EOPNOTSUPP;
-               kiocb_flags |= IOCB_NOIO;
        }
        if (flags & RWF_ATOMIC) {
                if (rw_type != WRITE)

Doesn't unconditional readahead break the semantics of RWF_NOWAIT,
which is supposed to avoid waiting for I/O? For example, the readahead
allocation might trigger pageout of a dirty page.
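To illustrate the flag relationship being discussed: the following is a
userspace sketch, not kernel code. The bit positions are placeholders
chosen for illustration (the real values live in include/linux/gfp_types.h),
but the compositions mirror the kernel's definitions. The point is that
clearing __GFP_DIRECT_RECLAIM (which is what PF_MEMALLOC_NORECLAIM
effectively does) is stronger than GFP_NOIO: GFP_NOIO still performs
direct reclaim, it just won't start new I/O from it, whereas a
~__GFP_DIRECT_RECLAIM allocation cannot block on reclaim at all.

#include <assert.h>
#include <stdio.h>

/* Placeholder bit values for illustration only */
#define __GFP_DIRECT_RECLAIM  (1u << 0)
#define __GFP_KSWAPD_RECLAIM  (1u << 1)
#define __GFP_IO              (1u << 2)
#define __GFP_FS              (1u << 3)

#define __GFP_RECLAIM (__GFP_DIRECT_RECLAIM | __GFP_KSWAPD_RECLAIM)

/* Compositions modeled on the kernel's gfp_types.h */
#define GFP_KERNEL (__GFP_RECLAIM | __GFP_IO | __GFP_FS)
#define GFP_NOFS   (__GFP_RECLAIM | __GFP_IO)  /* no FS recursion */
#define GFP_NOIO   (__GFP_RECLAIM)             /* reclaim, but no new I/O */
#define GFP_NOWAIT (__GFP_KSWAPD_RECLAIM)      /* no blocking at all */

int main(void)
{
	/* GFP_NOIO still allows direct reclaim... */
	assert(GFP_NOIO & __GFP_DIRECT_RECLAIM);
	/* ...while GFP_NOWAIT does not. */
	assert(!(GFP_NOWAIT & __GFP_DIRECT_RECLAIM));
	/* Masking out direct reclaim from a GFP_NOFS readahead context
	 * leaves nothing that can block: __GFP_IO is moot once direct
	 * reclaim is gone, since only direct reclaim issues I/O. */
	unsigned int masked = GFP_NOFS & ~__GFP_DIRECT_RECLAIM;
	assert(!(masked & __GFP_DIRECT_RECLAIM));
	printf("ok\n");
	return 0;
}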

--
Regards

Yafang
