Re: [PATCH 1/2] writeback: Improve busyloop prevention

On Sat 22-10-11 12:20:19, Wu Fengguang wrote:
> On Fri, Oct 21, 2011 at 06:26:16AM +0800, Jan Kara wrote:
> > On Thu 20-10-11 21:39:38, Wu Fengguang wrote:
> > > On Thu, Oct 20, 2011 at 08:33:00PM +0800, Wu Fengguang wrote:
> > > > On Thu, Oct 20, 2011 at 08:09:09PM +0800, Wu Fengguang wrote:
> > > > > Jan,
> > > > > 
> > > > > I tried the below combined patch on top of the IO-less one and found some
> > > > > minor regressions. I studied the thresh=1G/ext3-1dd case in particular
> > > > > and found that nr_writeback and the iostat avgrq-sz drop from time to time.
> > > > > 
> > > > > I'll try to bisect the changeset.
> > > 
> > > This is interesting: the culprit turns out to be patch 1, which is
> > > simply
> > >                 if (work->for_kupdate) {
> > >                         oldest_jif = jiffies -
> > >                                 msecs_to_jiffies(dirty_expire_interval * 10);
> > > -                       work->older_than_this = &oldest_jif;
> > > -               }
> > > +               } else if (work->for_background)
> > > +                       oldest_jif = jiffies;
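
For reference, after this hunk the branch presumably ends up looking roughly
like the sketch below; the surrounding wb_writeback() context and the
initialisation of oldest_jif are assumptions, only the hunk itself is from
the patch:

                if (work->for_kupdate) {
                        /* kupdate writeback: only inodes dirtied before the
                         * dirty_expire_interval cutoff are eligible */
                        oldest_jif = jiffies -
                                msecs_to_jiffies(dirty_expire_interval * 10);
                } else if (work->for_background) {
                        /* background writeback: refresh the cutoff on every
                         * pass so inodes redirtied during the run (e.g. the
                         * block device inode redirtied by kjournald) become
                         * eligible again */
                        oldest_jif = jiffies;
                }
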
> >   Yeah. I had a look at the trace and you can see that during the
> > whole dd run we were running a single background writeback work (you can
> > verify that from work->nr_pages decreasing steadily).
> 
> Yes, it is.
> 
> > Without refreshing
> > oldest_jif, we'd write the block device inode for /dev/sda (you can identify
> > it by bdi=8:0, ino=0) only once. When refreshing oldest_jif, we write it
> > every 5 seconds (kjournald dirties the device inode after committing a
> > transaction by dirtying the metadata buffers which were just committed and
> > can now be checkpointed either by kjournald or the flusher thread).
> 
> OK, now I understand the regular drops of nr_writeback and avgrq-sz:
> every 5s it takes _some time_ to write inode 0, during which the
> flusher is blocked and the IO queue runs low.
> 
> > So although the performance is slightly reduced, I'd say the
> > behavior is the desired one.
> 
> OK. However, it's sad to see the flusher get blocked from time to time...
  Well, it doesn't get blocked. It just has to write out an inode which cannot
be written out as efficiently. But that's nothing we can really solve...
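
To make the effect concrete, here is a simplified sketch of the expiry test
that gates which dirty inodes get queued (assumed to mirror the
older_than_this check in the queueing code; it is not the literal
fs/fs-writeback.c function):

        /* Hypothetical helper, for illustration only. */
        static bool inode_expired(struct inode *inode, unsigned long older_than_this)
        {
                /*
                 * Inodes dirtied after the cutoff are skipped.  With a stale
                 * oldest_jif the block device inode, redirtied by kjournald
                 * on every commit, stays "too young" and is written only
                 * once; refreshing oldest_jif to jiffies for background
                 * writeback makes it eligible again on each pass.
                 */
                return !time_after(inode->dirtied_when, older_than_this);
        }
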

> > Also, if you observed the performance over a really long run, the difference
> > should get smaller: eventually kjournald has to flush the metadata blocks
> > anyway when the journal fills up and journal space needs to be freed, and at
> > that point the flushing is even more expensive because we have to do a
> > blocking write during which all transaction operations, and thus effectively
> > the whole filesystem, are blocked.
> 
> OK. The dd test time was 300s; I'll increase it to 900s (I cannot go
> longer because it's a 90GB disk partition).
  Yes, that might be interesting to try...

								Honza
-- 
Jan Kara <jack@xxxxxxx>
SUSE Labs, CR