Re: [HELP] FUSE writeback performance bottleneck

On Wed, Jun 5, 2024 at 1:17 AM Josef Bacik <josef@xxxxxxxxxxxxxx> wrote:
>
> On Tue, Jun 04, 2024 at 11:39:17PM +0200, Bernd Schubert wrote:
> >
> >
> > On 6/4/24 18:53, Josef Bacik wrote:
> > > On Tue, Jun 04, 2024 at 04:13:25PM +0200, Bernd Schubert wrote:
> > >>
> > >>
> > >> On 6/4/24 12:02, Miklos Szeredi wrote:
> > >>> On Tue, 4 Jun 2024 at 11:32, Bernd Schubert <bernd.schubert@xxxxxxxxxxx> wrote:
> > >>>
> > >>>> Back to the background for the copy: it copies pages to avoid
> > >>>> blocking on memory reclaim, yet with that allocation it in fact
> > >>>> increases memory pressure even more. Isn't the right solution to mark
> > >>>> those pages as not reclaimable and to avoid blocking on them? Which is
> > >>>> what the tmp pages do, just not in a beautiful way.
> > >>>
> > >>> Copying to the tmp page is the same as marking the pages as
> > >>> non-reclaimable and non-syncable.
> > >>>
> > >>> Conceptually it would be nice to only copy when there's something
> > >>> actually waiting for writeback on the page.
> > >>>
> > >>> Note: normally the WRITE request would be copied to userspace along
> > >>> with the contents of the pages very soon after starting writeback.
> > >>> After this the contents of the page no longer matter, and we can just
> > >>> clear writeback without doing the copy.
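
(For context, the tmp page mechanism under discussion works roughly like
the following; this is a simplified sketch in the spirit of
fuse_writepage_locked() in fs/fuse/file.c, not the exact code, and the
function name is invented:

static int fuse_writepage_sketch(struct page *page)
{
	struct page *tmp_page;

	tmp_page = alloc_page(GFP_NOFS | __GFP_HIGHMEM);
	if (!tmp_page)
		return -ENOMEM;

	/* The WRITE request will carry tmp_page, not the real page. */
	copy_highpage(tmp_page, page);

	set_page_writeback(page);
	/* ... build and queue the WRITE request using tmp_page ... */
	end_page_writeback(page);	/* real page released immediately */

	return 0;
}

Because PG_writeback on the real page is cleared right away, reclaim and
sync(2) never wait on a fuse page today; the price is the extra allocation
and copy on every writeback.)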
> > >>>
> > >>> But if the request gets stuck in the input queue before being copied
> > >>> to userspace, then deadlock can still happen if the server blocks on
> > >>> direct reclaim and won't continue with processing the queue.   And
> > >>> sync(2) will also block in that case.
> > >>>
> > >>> So we'd somehow need to handle stuck WRITE requests.   I don't see an
> > >>> easy way to do this "on demand", when something actually starts
> > >>> waiting on PG_writeback.  Alternatively the page copy could be done
> > >>> after a timeout, which is ugly, but much easier to implement.
> > >>
> > >> I think the timeout method would only work if we have already allocated
> > >> the pages; under memory pressure page allocation might not work well.
> > >> But then this still seems to be a workaround, because we don't take any
> > >> less memory with these copied pages.
> > >> I'm going to look into mm/ to see if there isn't a better solution.
> > >
> > > I've thought a bit about this, and I still don't have a good solution, so I'm
> > > going to throw out my random thoughts and see if it helps us get to a good spot.
> > >
> > > 1. Generally we are moving away from GFP_NOFS/GFP_NOIO to instead use
> > >    memalloc_*_save/memalloc_*_restore, so instead the process is marked being in
> > >    these contexts.  We could do something similar for FUSE, tho this gets hairy
> > >    with things that async off request handling to other threads (which is all of
> > >    the FUSE file systems we have internally).  We'd need to have some way to
> > >    apply this to an entire process group, but this could be a workable solution.
> > >
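
(The scope idiom point 1 refers to is the one from linux/sched/mm.h: while
the scope is active, every allocation made by the task implicitly behaves
as GFP_NOFS. A minimal sketch, with the handler names invented:

static void fuse_handle_req_nofs(struct fuse_req *req)
{
	unsigned int nofs_flags = memalloc_nofs_save();

	/*
	 * Allocations in this scope cannot recurse into filesystem
	 * reclaim, even if they pass GFP_KERNEL.
	 */
	handle_request(req);	/* hypothetical request handler */

	memalloc_nofs_restore(nofs_flags);
}

The catch is exactly what Josef notes: the marking is strictly per-task,
so it does not follow a request that the server hands off to another
thread.)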
> >
> > I'm not sure how either of the two (GFP_ and memalloc_) would work for
> > userspace allocations.
> > Wouldn't we basically need to have a feature to disable memory
> > allocations for fuse userspace tasks? Hmm, maybe through mem_cgroup.
> > Although even then, the file system might depend on other kernel
> > resources (backend file system or block device or even network) that
> > might do allocations on their own without the knowledge of the fuse server.
> >
>
> Basically we would invoke this only in the case that we're handling a request
> under memory pressure, and then any allocation would automatically have
> GFP_NOFS protection because it's flagged at the task level.
>
> Again, there are a lot of problems with this, like how we set it for the task,
> how it works for threads, etc.
>
> > > 2. Per-request timeouts.  This is something we're planning on tackling for other
> > >    reasons, but it could fit nicely here to say "if this fuse fs has a
> > >    per-request timeout, skip the copy".  That way we at least have an upper
> > >    bound on how long we would be "deadlocked".  I don't love this approach
> > >    because it's still a deadlock until the timeout elapses, but it's an idea.
> >
> > Hmm, how do we know "this fuse fs has a per-request timeout"? I don't
> > think we could trust initialization flags set by userspace.
> >
>
> It would be controlled by the kernel.  So at init time the fuse file system says
> "my command timeout is 30 minutes."  Then the kernel enforces this by having a
> per-request timeout, and once that 30 minutes elapses we cancel the request and
> EIO it.  User space doesn't do anything beyond telling the kernel what its
> timeout is, so this would be safe.
>
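(One possible shape for such a kernel-enforced timeout, purely as an
illustration: fuse_req has no timeout_work field today, and the names
below are invented, not existing fuse code:

/*
 * Armed when the request is queued, e.g.:
 *	schedule_delayed_work(&req->timeout_work, fc->req_timeout);
 * Racing with normal completion is glossed over here.
 */
static void fuse_req_timed_out(struct work_struct *work)
{
	struct delayed_work *dwork = to_delayed_work(work);
	struct fuse_req *req = container_of(dwork, struct fuse_req,
					    timeout_work);

	req->out.h.error = -EIO;	/* or -ETIMEDOUT, see below */
	fuse_request_end(req);		/* complete and wake the waiter */
}

where fc->req_timeout would come from the value userspace advertised at
init time.)
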

Maybe that would be better configured by the mounter, similar to nfs -o timeo,
and maybe consider an opt-in to returning ETIMEDOUT in this case.
At least nfsd will pass that error to the nfs client, and the nfs client
will retry.

Different applications (or network protocols) handle timeouts differently,
so the timeout and error seem like a decision for the admin/mounter, not
for the fuse server, although there may be a fuse fs that would want to
set the default timeout, as if to request that the kernel be its watchdog
(i.e. do not expect me to take more than 30 min to handle any request).
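
(Hypothetically, wiring that into fuse's existing fs_context option
parsing could look like this; "timeo", OPT_TIMEO and req_timeout are
invented names, not an existing fuse mount option:

static const struct fs_parameter_spec fuse_fs_parameters[] = {
	/* ... existing fuse options ... */
	fsparam_u32("timeo", OPT_TIMEO),	/* timeout in seconds */
	{}
};

/* and in fuse_parse_param(): */
	case OPT_TIMEO:
		ctx->req_timeout = result.uint_32 * HZ;
		break;

That keeps the policy with the admin/mounter, while a server-supplied
default at init time could still act as the watchdog fallback.)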

Thanks,
Amir.