Re: [PATCH v2 2/4] mm/vmalloc: add support for __GFP_NOFAIL

On Thu, Nov 25, 2021 at 9:46 AM Michal Hocko <mhocko@xxxxxxxx> wrote:
>
> On Wed 24-11-21 21:11:42, Uladzislau Rezki wrote:
> > On Tue, Nov 23, 2021 at 05:02:38PM -0800, Andrew Morton wrote:
> > > On Tue, 23 Nov 2021 20:01:50 +0100 Uladzislau Rezki <urezki@xxxxxxxxx> wrote:
> > >
> > > > On Mon, Nov 22, 2021 at 04:32:31PM +0100, Michal Hocko wrote:
> > > > > From: Michal Hocko <mhocko@xxxxxxxx>
> > > > >
> > > > > Dave Chinner has mentioned that some of the xfs code would benefit from
> > > > > kvmalloc support for __GFP_NOFAIL because they have allocations that
> > > > > cannot fail and they do not fit into a single page.
> > >
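To make the goal concrete: a tiny sketch of what such a caller wants to be
able to write once this lands (the variable names are only illustrative):

	void *buf;

	/* With __GFP_NOFAIL honoured, kvmalloc() may sleep and retry
	 * internally, but it does not return NULL - even for sizes that
	 * do not fit into a single page and fall back to vmalloc. */
	buf = kvmalloc(size, GFP_KERNEL | __GFP_NOFAIL);
	/* ... use buf ... */
	kvfree(buf);
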
> > > Perhaps we should tell xfs "no, do it internally".  Because this is a
> > > rather nasty-looking thing - do we want to encourage other callsites to
> > > start using it?
> > >
> > > > > A large part of the vmalloc implementation already complies with the
> > > > > given gfp flags, so no extra work is needed there. The area and page
> > > > > table allocations are the exceptions. Implement a retry loop for those.
> > > > >
> > > > > Add a short sleep before retrying. 1 jiffy is a completely arbitrary
> > > > > timeout. Ideally the retry would wait for an explicit event - e.g.
> > > > > a change to the vmalloc space if the failure was caused by space
> > > > > fragmentation or depletion. But there are multiple different reasons
> > > > > to retry, and this could become much more complex. Keep the retry
> > > > > simple for now and just sleep to prevent hogging CPUs.
> > > > >
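Roughly, the shape being described is the following (a sketch against the
vmalloc internals; the argument list is abbreviated and illustrative, not
the exact patch):

	again:
		area = __get_vm_area_node(real_size, align, shift, VM_ALLOC,
					  start, end, node, gfp_mask, caller);
		if (!area) {
			if (gfp_mask & __GFP_NOFAIL) {
				/* 1 jiffy is an arbitrary back-off */
				schedule_timeout_uninterruptible(1);
				goto again;
			}
			return NULL;
		}
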
> > >
> > > Yes, the horse has already bolted.  But we didn't want that horse anyway ;)
> > >
> > > I added GFP_NOFAIL back in the Mesozoic era because quite a lot of
> > > sites were doing open-coded try-forever loops.  I thought "hey, they
> > > shouldn't be doing that in the first place, but let's at least
> > > centralize the concept to reduce code size, code duplication and so
> > > it's something we can now grep for".  But longer term, all GFP_NOFAIL
> > > sites should be reworked to no longer need to do the retry-forever
> > > thing.  In retrospect, this bright idea of mine seems to have added
> > > license for more sites to use retry-forever.  Sigh.
> > >
> > > > > +               if (nofail) {
> > > > > +                       schedule_timeout_uninterruptible(1);
> > > > > +                       goto again;
> > > > > +               }
> > >
> > > The idea behind congestion_wait() is to prevent us from having to
> > > hard-wire delays like this.  congestion_wait(1) would sleep for up to
> > > one millisecond, but will return earlier if reclaim events happened
> > > which make it likely that the caller can now proceed with the
> > > allocation attempt successfully.
> > >
> > > However it turns out that congestion_wait() was quietly broken at the
> > > block level some time ago.  We could perhaps resurrect the concept at
> > > another level - say by releasing congestion_wait() callers if an amount
> > > of memory newly becomes allocatable.  This obviously asks for inclusion
> > > of zone/node/etc info from the congestion_wait() caller.  But that's
> > > just an optimization - if the newly-available memory isn't useful to
> > > the congestion_wait() caller, they just fail the allocation attempts
> > > and wait again.
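A sketch of the pattern congestion_wait() was meant to enable, assuming the
historical congestion_wait(sync, timeout) API from <linux/backing-dev.h>
(since removed from the tree):

	struct page *page;

	while (!(page = alloc_pages(gfp_mask, order)))
		/* returns early if writeback congestion clears; otherwise
		 * sleeps for at most the given number of jiffies */
		congestion_wait(BLK_RW_ASYNC, HZ / 50);
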
> > >
> > > > Well, that is sad...
> > > > I have raised two concerns in our previous discussion about this change.
> > >
> > > Can you please reiterate those concerns here?
> > >
> > 1. I proposed to retry (on failure) in one single place, i.e. to get rid
> > of duplicating and spreading the logic across several places. This is
> > about simplification.
>
> I am all for simplifications. But the presented simplification leads to 2) and ...
>
> > 2. The second one is about unwinding and releasing everything we have
> > just accumulated in terms of memory consumption. If the failure occurs,
> > we are in a low-memory condition or under high memory pressure. In this
> > case, since we are about to sleep for some milliseconds before repeating,
> > IMHO it makes sense to release the memory:
> >
> > - to prevent killing apps or a possible OOM;
> > - because we can end up looping for a long time, or even forever, if users
> >   do nasty things with the vmalloc API and the __GFP_NOFAIL flag.
>
> ... this is where we disagree, and I have tried to explain why. The primary
> memory to allocate is the set of pages backing the vmalloc area. Failing to
> allocate a few page tables - which, btw, do not fail as they are order-0 -
> and then throwing away the whole, much more expensive work of allocating
> the former is really wasteful. You had a concern about OOM killer
> invocation while retrying the page table allocation, but you should
> realize that page table allocations might already invoke the OOM killer,
> so that is absolutely nothing new.
>
We are in a slow path and this is a corner case; it means we will sleep
for many milliseconds - for example, with CONFIG_HZ_100 one jiffy is
10 milliseconds. I would agree with you if we were requesting memory and
repeating in a tight loop because of a time constraint or a
latency-sensitive workload.
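
For reference, one jiffy in wall-clock terms, using jiffies_to_msecs()
from <linux/jiffies.h>:

	unsigned int ms = jiffies_to_msecs(1);
	/* 10ms with CONFIG_HZ=100, 4ms with CONFIG_HZ=250,
	 * 1ms with CONFIG_HZ=1000 */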

Is any workload sensitive to this? If so, we definitely cannot go with
any delay there.

As for the OOM killer, you are right. But it can also be that we are the
source that triggers it, directly or indirectly. Unwinding and cleaning
up is the most we can actually do here to stay fair to the OOM killer.
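
To make that concrete, a sketch of the unwind-then-retry shape I have in
mind (try_vmalloc_once() is a hypothetical helper that frees everything
it grabbed before returning NULL on failure):

	for (;;) {
		void *p = try_vmalloc_once(size, gfp_mask);

		if (p)
			return p;
		/* nothing is held across the sleep, so the retry does not
		 * pin memory while the system is under pressure */
		schedule_timeout_uninterruptible(1);
	}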

Therefore I root for the simplification and for addressing the OOM-related
concerns :) But maybe there will be other opinions.

Thanks!

--
Uladzislau Rezki


