Re: [fuse-devel] Writing to FUSE via mmap extremely slow (sometimes) on some machines?

Here’s one more thing I noticed: when polling
/sys/kernel/debug/bdi/0:93/stats, I see that BdiDirtied and BdiWritten
remain at their original values while the kernel sends FUSE read
requests, and only go up once the kernel transitions into sending
FUSE write requests. Notably, the page-dirtying throttling happens
during the read phase, which is most likely why the write bandwidth is
(correctly) measured as 0.
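
For reference, the polling amounts to roughly the following (0:93 is
the bdi device ID for the FUSE mount on my machine; yours will differ,
and the counter values in the snapshot below are purely illustrative):

```shell
# Illustrative only: grep a fabricated stats snapshot so the commands
# can be shown end to end; the field names match the real
# /sys/kernel/debug/bdi/<dev>/stats layout, the values do not.
cat > /tmp/bdi_stats_sample <<'EOF'
BdiWriteback:            0 kB
BdiDirtied:          12288 kB
BdiWritten:          10240 kB
EOF

# Against the live debugfs file this would be something like:
#   watch -n1 "grep -E 'BdiDirtied|BdiWritten' /sys/kernel/debug/bdi/0:93/stats"
grep -E 'BdiDirtied|BdiWritten' /tmp/bdi_stats_sample
```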

Do we have any ideas on why the kernel sends FUSE reads at all?

On Thu, Mar 5, 2020 at 3:45 PM Michael Stapelberg
<michael+lkml@xxxxxxxxxxxxx> wrote:
>
> Thanks for taking a look!
>
> Find attached a trace file which illustrates that the device’s write
> bandwidth (write_bw) decreases from the initial 100 MB/s down to,
> eventually, 0 (not included in the trace). While observing the
> pathologically slow write-back performance, I saw write_bw=0!
>
> The trace was generated with these tracepoints enabled:
> echo 1 > /sys/kernel/debug/tracing/events/writeback/balance_dirty_pages/enable
> echo 1 > /sys/kernel/debug/tracing/events/writeback/bdi_dirty_ratelimit/enable
>
> I wonder why the measured write bandwidth decreases so much. Any thoughts?
>
> On Tue, Mar 3, 2020 at 3:25 PM Tejun Heo <tj@xxxxxxxxxx> wrote:
> >
> > On Tue, Mar 03, 2020 at 03:21:47PM +0100, Michael Stapelberg wrote:
> > > Find attached trace.log (cat /sys/kernel/debug/tracing/trace) and
> > > fuse-debug.log (FUSE daemon with timestamps).
> > >
> > > Does that tell you something, or do we need more data? (If so, how?)
> >
> > This is likely the culprit.
> >
> >  .... 1319822.406198: balance_dirty_pages: ... bdi_dirty=68 dirty_ratelimit=28 ...
> >
> > For whatever reason, bdp calculated that the dirty throttling
> > threshold for the fuse device is 28 pages which is extremely low. Need
> > to track down how that number came to be. I'm afraid from here on it'd
> > mostly be reading source code and sprinkling printks around but the
> > debugging really comes down to figuring out how we ended up with 68
> > and 28.
> >
> > Thanks.
> >
> > --
> > tejun



