Hi all,
Is there any way to tune the linux NFSv3 client to prefer to write data
straight to an async-mounted server, rather than having large writes to
a file stack up in the local pagecache before being synced on close()?
I have an application which (stupidly) expects system calls to return
fairly rapidly, otherwise an application-layer timeout occurs. If I
write (say) 100MB of data to an NFS share with the app, the write()s
return almost immediately as the local pagecache is filled up - but then
close() blocks for several minutes as the data is synced to the server
over a slowish link. Mounting the share with -o sync fixes this, as does
opening the file with O_SYNC or O_DIRECT - but ideally I'd like to
encourage the client to flush a bit more aggressively to the server
without the performance hit of making every write synchronous.
Is there a way to cap the size of pagecache that the NFS client uses?
This is currently on a 2.6.18 kernel (CentOS 5.5), although I'm more
than happy to use something less prehistoric if that's what it takes.
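The closest thing I've found so far is the global VM writeback knobs
below - but these affect all filesystems, not just the NFS client, which
is why I'm asking whether there's a per-mount or NFS-specific cap
instead. (The specific values here are just illustrative guesses, not
recommendations.)

```shell
# Start background writeback much earlier (at 1% of RAM dirty)
sysctl -w vm.dirty_background_ratio=1
# Throttle writers once 5% of RAM is dirty
sysctl -w vm.dirty_ratio=5
# Consider dirty pages old enough to flush after 5 seconds
sysctl -w vm.dirty_expire_centisecs=500
```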
M.
--
Matthew Hodgson
Development Program Manager
OpenMarket | www.openmarket.com/europe
matthew.hodgson@xxxxxxxxxxxxxx
--
To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html