Re: [PATCH] xfs/013: allow non-write fsstress operations in background workload

On Wed, Jun 18, 2014 at 09:55:46AM +1000, Dave Chinner wrote:
> On Tue, Jun 03, 2014 at 02:28:49PM -0400, Brian Foster wrote:
> > It has been reported that test xfs/013 probably uses more space than
> > necessary, exhausting space if run against a several GB sized ramdisk.
> > xfs/013 primarily creates, links and removes inodes. Most of the space
> > consumption occurs via the background fsstress workload.
> > 
> > Remove the fsstress -w option that suppresses non-write operations. This
> > slightly reduces the storage footprint while still providing a
> > background workload for the test.
> > 
> > Signed-off-by: Brian Foster <bfoster@xxxxxxxxxx>
> 
> This change makes the runtime blow out on a ramdisk from 4 seconds
> to over 10 minutes on my test machine. Non-ramdisk machines seem to
> be completely unaffected.
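
[For reference, the change under discussion drops fsstress's -w switch,
which zeroes the frequencies of all non-write operations, from the
test's background workload. A hedged before/after sketch of the
invocation — the exact -n/-p values in xfs/013 may differ:

    # before: -w limits fsstress to write-class operations only
    $FSSTRESS_PROG -w -d $SCRATCH_MNT/fsstress -n 9999999 -p 4 > /dev/null 2>&1 &

    # after: full operation mix (create, link, unlink, sync, ...)
    $FSSTRESS_PROG -d $SCRATCH_MNT/fsstress -n 9999999 -p 4 > /dev/null 2>&1 &

Note that the full mix includes sync-style operations, which is
presumably what drives the concurrent sync() load visible in the
profile below.]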
> 
> I was going to say "no, bad change", but I noticed that my
> spinning disk VMs weren't affected at all. Looking more closely,
> xfs/013 is now pegging all 16 CPUs on the VM. The profile:
> 
> -  60.73%  [kernel]  [k] do_raw_spin_lock
>    - do_raw_spin_lock
>       - 99.98% _raw_spin_lock
>          - 99.83% sync_inodes_sb
>               sync_inodes_one_sb
>               iterate_supers
>               sys_sync
>               tracesys
>               sync
> -  32.76%  [kernel]  [k] delay_tsc
>    - delay_tsc
>       - 98.43% __delay
>            do_raw_spin_lock
>          - _raw_spin_lock
>             - 99.99% sync_inodes_sb
>                  sync_inodes_one_sb
>                  iterate_supers
>                  sys_sync
>                  tracesys
>                  sync
> 
> OK, that's a kernel problem, not a problem with the change in the
> test...
> 
> /me goes and dusts off his "concurrent sync scalability" patches.

Turns out the reason this problem suddenly showed up was that I
had another (500TB) XFS filesystem mounted that had several million
clean cached inodes on it from other testing I was doing before the
xfstests run. Even so, having sync go off the deep end when there
are lots of clean cached inodes seems like a Bad Thing to me. :/
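
[For anyone who wants to poke at this, a minimal sketch that should
reproduce the contention pattern in the profile above, assuming a
filesystem already holding a few million clean cached inodes; the
mount point and process count here are illustrative:

    # warm the inode cache with clean inodes (read-only traversal)
    find /mnt/bigfs -xdev > /dev/null 2>&1

    # hammer sync() concurrently from 16 processes
    for i in $(seq 16); do
        ( while :; do sync; done ) &
    done

    # expect do_raw_spin_lock under sync_inodes_sb to dominate
    perf top

    kill $(jobs -p)

The underlying issue is that sync_inodes_sb() walks the superblock's
entire cached inode list under a spinlock to wait for writeback,
whether or not the inodes are dirty, so concurrent sync() callers
serialize on that lock.]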

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
