Re: [PATCH v3 3/3] NFSD: Add support for encoding multiple segments

On Thu, Apr 16, 2015 at 08:50:02AM +1000, Dave Chinner wrote:
> On Wed, Apr 15, 2015 at 04:00:16PM -0400, J. Bruce Fields wrote:
> > On Wed, Apr 15, 2015 at 03:56:14PM -0400, J. Bruce Fields wrote:
> > > On Wed, Apr 15, 2015 at 03:32:02PM -0400, Anna Schumaker wrote:
> > > > I just ran some more tests comparing the directio case across
> > > > different filesystem types.  These tests used three 1G files: 100%
> > > > data, 100% hole, and a mixed file with alternating 4k data and hole
> > > > segments.  The mixed case seems to be consistently slower than
> > > > NFS v4.1, and I'm at a loss for anything I could do to make it faster.
> > > > Here are my numbers:
> > > 
> > > Have you tried the implementation we discussed that always returns a
> > > single segment covering the whole requested range, by treating holes as
> > > data if necessary when they don't cover the whole range?

Uh, sorry, I forgot: I think you're running with the patches that
support full multi-segment READ_PLUS on both sides, so there's no
issue with multiplying RPCs in this case.

Still, it might be interesting to compare.  And it wouldn't hurt to
remind us of these details when you repost this stuff, to help keep my
forgetful self from going in circles.
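
To spell out what I mean by "the implementation we discussed": roughly
the sketch below, on the server side.  This is only an illustration;
encode_hole_segment() and encode_data_segment() are made-up helper
names, not actual nfsd code.

	/*
	 * Single-segment READ_PLUS: probe the requested range once, and
	 * encode a HOLE segment only when the hole covers the entire
	 * range; anything else goes back as one DATA segment, so the
	 * reply never splinters into many small segments.
	 */
	static __be32 encode_read_plus_single(struct file *file,
					      loff_t offset,
					      unsigned long count)
	{
		loff_t data = vfs_llseek(file, offset, SEEK_DATA);

		/* An error (e.g. ENXIO past EOF) or data starting beyond
		 * the range means the whole range is a hole. */
		if (data < 0 || data >= offset + (loff_t)count)
			return encode_hole_segment(offset, count);

		/* A hole that doesn't span the request is sent as data. */
		return encode_data_segment(file, offset, count);
	}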

> > > (Also: I assume it's the same as before, but: when you post test
> > > results, could you repost if necessary:
> > > 
> > > 	- what the actual test is
> > > 	- what the hardware/software setup is on client and server
> > > 
> > > so that we have reproducible results for posterity's sake.)
> > > 
> > > Interesting that "Mixed" is a little slower even before READ_PLUS.
> > > 
> > > And I guess we should really report this to ext4 people, looks like they
> > > may have a bug.
> > 
> > FWIW, this is what I was using to test SEEK_HOLE/SEEK_DATA and map out
> > holes on files on my local disk.  Might be worth checking whether the
> > ext4 slowdowns are reproducible just with something like this, to rule
> > out protocol problems.
> 
> Wheel reinvention. :)

xfs_io appears to have a lot of wheels.  OK, I'll go read that man page
one of these days.
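
For anyone without xfs_io handy, something like the program below does
the same mapping with plain lseek().  A minimal sketch, not the exact
program I was using:

	/* Map out a file's data/hole extents with
	 * lseek(SEEK_DATA)/lseek(SEEK_HOLE). */
	#define _GNU_SOURCE		/* SEEK_DATA, SEEK_HOLE */
	#include <stdio.h>
	#include <fcntl.h>
	#include <unistd.h>

	int main(int argc, char **argv)
	{
		off_t data, hole = 0;
		int fd;

		if (argc != 2) {
			fprintf(stderr, "usage: %s <file>\n", argv[0]);
			return 1;
		}
		fd = open(argv[1], O_RDONLY);
		if (fd < 0) {
			perror("open");
			return 1;
		}
		/* Each pass finds the next data extent and the hole that
		 * follows it; lseek() fails with ENXIO at end of file. */
		while ((data = lseek(fd, hole, SEEK_DATA)) >= 0 &&
		       (hole = lseek(fd, data, SEEK_HOLE)) >= 0)
			printf("DATA\t%lld\nHOLE\t%lld\n",
			       (long long)data, (long long)hole);
		close(fd);
		return 0;
	}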

--b.

> 
> $ rm -f /mnt/scratch/bar
> $ for i in `seq 20 -2 0`; do
> > sudo xfs_io -f -c "pwrite $((i * 8192)) 4096" /mnt/scratch/bar
> > done
> .....
> $ sync
> $ sudo xfs_io -c "seek -ar 0" /mnt/scratch/bar
> Whence  Result
> DATA    0
> HOLE    4096
> DATA    16384
> HOLE    20480
> DATA    32768
> HOLE    36864
> DATA    49152
> HOLE    53248
> DATA    65536
> HOLE    69632
> DATA    81920
> HOLE    86016
> DATA    98304
> HOLE    102400
> DATA    114688
> HOLE    118784
> DATA    131072
> HOLE    135168
> DATA    147456
> HOLE    151552
> DATA    163840
> HOLE    167936
> $
> 
> -Dave.
> -- 
> Dave Chinner
> david@xxxxxxxxxxxxx

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs



