Re: Small O_SYNC writes are no longer NFS_DATA_SYNC

On Wed, 16 Feb 2011 15:50:42 -0500 Jeff Layton <jlayton@xxxxxxxxxx> wrote:

> On Thu, 17 Feb 2011 07:26:18 +1100
> NeilBrown <neilb@xxxxxxx> wrote:
> 
> > On Wed, 16 Feb 2011 08:11:50 -0500 Jeff Layton <jlayton@xxxxxxxxxx> wrote:
> > 
> > > On Wed, 16 Feb 2011 17:15:55 +1100
> > > NeilBrown <neilb@xxxxxxx> wrote:
> > > 
> > > > 
> > > > Hi Trond,
> > > >  I wonder if I might get your help/advice on an issue with NFS.
> > > > 
> > > >  It seems that NFS_DATA_SYNC is hardly used at all currently.  It is used for
> > > >  O_DIRECT writes and for writes 'for_reclaim', and for handling some error
> > > >  conditions, but that is about it.
> > > > 
> > > >  This appears to be a regression.
> > > > 
> > > >  Back in 2005, commit ab0a3dbedc5 in 2.6.13 says:
> > > > 
> > > >     [PATCH] NFS: Write optimization for short files and small O_SYNC writes.
> > > >     
> > > >      Use stable writes if we can see that we are only going to put a single
> > > >      write on the wire.
> > > > 
> > > >  which seems like a sensible optimisation, and we have a customer who
> > > >  values it.  Very roughly, they have an NFS server which optimises 'unstable'
> > > >  writes for throughput and 'stable' writes for latency - this seems like a
> > > >  reasonable approach.
> > > >  With a 2.6.16 kernel an application which generates many small sync writes
> > > >  gets adequate performance.  In 2.6.32 they see unstable writes followed by
> > > >  commits, which cannot be (or at least aren't) optimised as well.
> > > > 
> > > >  It seems this was changed by commit c63c7b0513953
> > > > 
> > > >     NFS: Fix a race when doing NFS write coalescing
> > > >     
> > > >  in 2.6.22.
> > > > 
> > > >  Is it possible/easy/desirable to get this behaviour back?  i.e. to use
> > > >  NFS_DATA_SYNC at least on sub-page writes triggered by a write to an
> > > >  O_SYNC file.
> > > > 
> > > >  My (possibly naive) attempt is as follows.  It appears to work as I expect
> > > >  (though it still uses SYNC for 1-page writes) but I'm not confident that it
> > > >  is "right".
> > > > 
> > > > Thanks,
> > > > 
> > > > NeilBrown
> > > > 
> > > > diff --git a/fs/nfs/write.c b/fs/nfs/write.c
> > > > index 10d648e..392bfa8 100644
> > > > --- a/fs/nfs/write.c
> > > > +++ b/fs/nfs/write.c
> > > > @@ -178,6 +178,9 @@ static int wb_priority(struct writeback_control *wbc)
> > > >  		return FLUSH_HIGHPRI | FLUSH_STABLE;
> > > >  	if (wbc->for_kupdate || wbc->for_background)
> > > >  		return FLUSH_LOWPRI;
> > > > +	if (wbc->sync_mode == WB_SYNC_ALL &&
> > > > +	    (wbc->range_end - wbc->range_start) < PAGE_SIZE)
> > > > +		return FLUSH_STABLE;
> > > >  	return 0;
> > > >  }
> > > >  
> > > 
> > > I'm not so sure about this change. wb_priority is called from
> > > nfs_wb_page. The comments there say:
> > > 
> > > /*
> > >  * Write back all requests on one page - we do this before reading it.
> > >  */
> > > 
> > > ...do we really need those writes to be NFS_FILE_SYNC?
> > 
> > Thanks for taking a look.
> > wb_priority is called from several places - yes.
> > 
> > In the nfs_wb_page case, I think we *do* want NFS_FILE_SYNC.
> > nfs_wb_page calls nfs_writepage_locked and then nfs_commit_inode which calls
> > nfs_scan_commit to send a COMMIT request for the page (if the write wasn't
> > stable).
> > 
> > By using NFS_FILE_SYNC we can avoid that COMMIT, and lose nothing (that I can
> > see).
> > 
> 
> Good point.
> 
> > > 
> > > I think that the difficulty here is determining when we really are
> > > going to just be doing a single write. In that case, then clearly a
> > > FILE_SYNC write is better than an unstable + COMMIT.
> > > 
> > > This is very workload dependent though. It's hard to know beforehand
> > > whether a page that we intend to write will be redirtied soon
> > > afterward. If it is, then FILE_SYNC writes may be worse than letting
> > > the server cache the writes until a COMMIT comes in.
> > > 
> > 
> > The hope is that sync_mode == WB_SYNC_ALL combined with a short 'range' are
> > sufficient.
> > 
> > In particular, WB_SYNC_ALL essentially says that we want this page out to
> > storage *now*, so a 'flush' of some sort is likely to follow.
> > 
> 
> Also a good point.
> 
> I guess my main worry is that, at least in the Linux NFS server case,
> these get translated to O_SYNC writes which can be very slow. If we
> know though that we're not likely to have follow-on writes that could
> be batched up then this is probably fine.

My aim is to trigger NFS_DATA_SYNC only when the app performs an O_SYNC write,
so having the server do an O_SYNC write in that case seems correct.

I suspect the heuristic I chose isn't quite that focussed.  I'm hoping someone
can either agree that it is probably focussed enough, or suggest how to
encode a heuristic that really does just catch O_SYNC writes...

> 
> I guess I just don't have a good feel for how often (if ever) we do
> multiple writes to the same file with different wbc ranges without an
> intervening commit. If that happens and we end up doing FILE_SYNC
> writes rather than unstable ones, then this change is probably not a
> good one.
> 
> Do you have any idea as to why unstable writes get worse performance
> for this customer? I know that base 2.6.32 did a *lot* more COMMITs
> than were really necessary. Perhaps they are seeing the effect of that?
> Or have they also tested something more recent and seen bad
> performance?

To quote:

------------------------
The customer's email application does primarily small sync writes.
We use a clustered filesystem in the backend to store the data exported via
NFS.
If we see the incoming write request with the stable flag set, we optimise the
buffer handling and update in place very efficiently.  If there is no stable
flag we have to assume there is more data to come, and either delay the
operation (speculating that there are more writes to the block before we should
acknowledge) or allocate a bigger buffer that needs to be written multiple
times (with each subsequent 4k write).  Both solutions slow down the
application a lot, as you either introduce even more latency or generate more
I/O activity to your disks than needed (by flushing the same filesystem block
multiple times).
So if we know the client sends data with direct I/O or O_SYNC set in the
client application, the data packets sent to the server should have the stable
flag turned on.
------------------------

So it seems to be the "unstable" write rather than the excess commits that
triggers the unfortunate behaviour.
They have only tried SLES11-SP1, which is 2.6.32-based.


> 
> > 
> > BTW, I'm wondering if the length of 'range' that we test should be related to
> > 'wsize' rather than PAGE_SIZE.  Any thoughts on that?
> > 
> 
> Yeah, that seems like a more meaningful test than page size.
> 


Thanks,
NeilBrown

