Re: [PATCH v2] writeback: Do not sync data dirtied after sync start

On Thu, Sep 26, 2013 at 09:23:58PM +0200, Jan Kara wrote:
> When there are processes heavily creating small files while sync(2) is
> running, it can easily happen that quite some new files are created
> between WB_SYNC_NONE and WB_SYNC_ALL pass of sync(2). That can happen
> especially if there are several busy filesystems (remember that sync
> traverses filesystems sequentially and waits in WB_SYNC_ALL phase on one
> fs before starting it on another fs). Because WB_SYNC_ALL pass is slow
> (e.g. causes a transaction commit and cache flush for each inode in
> ext3), resulting sync(2) times are rather large.

Yup, that can be a problem.

Build warning from the patch:

In file included from include/trace/ftrace.h:575:0,
                 from include/trace/define_trace.h:90,
                 from include/trace/events/writeback.h:603,
                 from fs/fs-writeback.c:89:
include/trace/events/writeback.h: In function 'ftrace_raw_event_writeback_queue_io':
include/trace/events/writeback.h:277:1: warning: initialization makes pointer from integer without a cast [enabled by default]
In file included from include/trace/ftrace.h:711:0,
                 from include/trace/define_trace.h:90,
                 from include/trace/events/writeback.h:603,
                 from fs/fs-writeback.c:89:
include/trace/events/writeback.h: In function 'perf_trace_writeback_queue_io':
include/trace/events/writeback.h:277:1: warning: initialization makes pointer from integer without a cast [enabled by default]

> The following script reproduces the problem:
> 
> function run_writers
> {
>   for (( i = 0; i < 10; i++ )); do
>     mkdir $1/dir$i
>     for (( j = 0; j < 40000; j++ )); do
>       dd if=/dev/zero of=$1/dir$i/$j bs=4k count=4 &>/dev/null
>     done &
>   done
> }
> 
> for dir in "$@"; do
>   run_writers $dir
> done
> 
> sleep 40
> time sync
> ======
> 
> Fix the problem by disregarding inodes dirtied after sync(2) was called
> in the WB_SYNC_ALL pass. To allow for this, sync_inodes_sb() now takes a
> time stamp when sync has started which is used for setting up work for
> flusher threads.
> 
> To give some numbers, when above script is run on two ext4 filesystems on
> simple SATA drive, the average sync time from 10 runs is 267.549 seconds
> with standard deviation 104.799426. With the patched kernel, the average
> sync time from 10 runs is 2.995 seconds with standard deviation 0.096.
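
(Aside for readers skimming the archive: the heart of the change is a
filtering step in the WB_SYNC_ALL pass. The userspace sketch below is
only illustrative, not the patch itself - the names wb_work,
older_than_this and dirtied_when merely approximate the real writeback
structures, and the kernel compares jiffies with time_before()/
time_after() rather than a plain "<=".)

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for the kernel's writeback structures. */
struct wb_work {
	unsigned long older_than_this;	/* stamp taken when sync(2) was called */
};

struct inode_stub {
	unsigned long dirtied_when;	/* when this inode was first dirtied */
};

/*
 * WB_SYNC_ALL pass: wait only on inodes that were already dirty when
 * sync(2) started; anything dirtied later is left to normal writeback.
 */
static bool should_wait_on(const struct inode_stub *inode,
			   const struct wb_work *work)
{
	return inode->dirtied_when <= work->older_than_this;
}

int main(void)
{
	struct wb_work work = { .older_than_this = 1000 };	/* sync(2) start */
	struct inode_stub old_file = { .dirtied_when = 900 };
	struct inode_stub new_file = { .dirtied_when = 1100 };

	/* old_file is waited on (prints 1), new_file is skipped (prints 0) */
	printf("old file: %d\n", should_wait_on(&old_file, &work));
	printf("new file: %d\n", should_wait_on(&new_file, &work));
	return 0;
}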

Hmmmm. 2.8 seconds on my XFS perf VM without the patch. Ok, try a
smaller VM backed by a single spindle of spinning rust rather than
SSDs. Over 10 runs I see:

kernel		min	max	avg
vanilla		0.18s	4.46s	1.63s
patched		0.14s	0.45s	0.28s

Definitely an improvement, but nowhere near the numbers you are
seeing for ext4 - maybe XFS isn't as susceptible to this problem
as ext4? Nope, ext4 on an unpatched kernel gives 1.66/6.81/3.12s
(which is less than your patched kernel results :), so it must be
something else configuration/hardware related.

Anyway, the change looks good, it just needs the above warning fixed...

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx