Re: buffered writeback torture program

Excerpts from Chris Mason's message of 2011-04-21 07:09:11 -0400:
> Excerpts from Vivek Goyal's message of 2011-04-20 18:06:26 -0400:
> > > 
> > > In this case the 128s spent in write was on a single 4K overwrite on a
> > > 4K file.
> > 
> > Chris, you seem to be doing 1MB (32768*32) writes on the fsync file instead of 4K.
> > I changed the size to 4K; still not much difference, though.
> 
> Whoops, I had that change made locally but didn't get it copied out.
> 
> > 
> > Once the program has exited because of the high write time, I restarted it, and
> > this time I don't see high write times.
> 
> I see this for some of my runs as well.
> 
> > 
> > First run
> > ---------
> > # ./a.out 
> > setting up random write file
> > done setting up random write file
> > starting fsync run
> > starting random io!
> > write time: 0.0006s fsync time: 0.3400s
> > write time: 63.3270s fsync time: 0.3760s
> > run done 2 fsyncs total, killing random writer
> > 
> > Second run
> > ----------
> > # ./a.out 
> > starting fsync run
> > starting random io!
> > write time: 0.0006s fsync time: 0.5359s
> > write time: 0.0007s fsync time: 0.3559s
> > write time: 0.0009s fsync time: 0.3113s
> > write time: 0.0008s fsync time: 0.4336s
> > write time: 0.0009s fsync time: 0.3780s
> > write time: 0.0008s fsync time: 0.3114s
> > write time: 0.0009s fsync time: 0.3225s
> > write time: 0.0009s fsync time: 0.3891s
> > write time: 0.0009s fsync time: 0.4336s
> > write time: 0.0009s fsync time: 0.4225s
> > write time: 0.0009s fsync time: 0.4114s
> > write time: 0.0007s fsync time: 0.4004s
> > 
> > Not sure why that would happen.
> > 
> > I am wondering why the pwrite/fsync process was throttled. It did not have any
> > pages in the page cache, and it shouldn't have hit the per-task dirty limits.
> > Does that mean the per-task dirty limit logic does not work, or am I
> > completely missing the root cause of the problem?
> 
> I haven't traced it to see.  This test box only has 1GB of RAM, so the
> dirty ratios can be very tight.

Oh, I see now.  The test program first creates the file with a big
streaming write.  So the task doing the streaming writes gets nailed
with the per-task dirty accounting because it is making a ton of dirty
data.

Then the task forks the random writer to do all the random IO.

Then the original pid goes back to do the fsyncs on the new file.

So, in the original run, we get stuffed into balance_dirty_pages because
the per-task limits show we've done a lot of dirties.

In all later runs, the file already exists, so our fsyncing process
hasn't done much dirtying at all.  Looks like the VM is doing something
sane; we just get nailed with the big random IO.

-chris
--
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

