Hi,

Jim Rees wrote:
> Matthew Hodgson wrote:
>> Is there any way to tune the Linux NFSv3 client to prefer to write data straight to an async-mounted server, rather than having large writes to a file stack up in the local pagecache before being synced on close()?
>
> It's been a while since I've done this, but I think you can tune this with the vm.dirty_writeback_centisecs and vm.dirty_background_ratio sysctls. The data will still go through the page cache, but you can reduce the amount that stacks up.
Yup, that does the trick - I'd tried this earlier, but hadn't gone far enough: seemingly I need to drop vm.dirty_writeback_centisecs all the way down to 1 (and vm.dirty_background_ratio to 1) for the back-pressure to propagate correctly for this use case. Thanks for the pointer!
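For the archives, a sketch of the settings that worked for me (run as root; the /etc/sysctl.conf stanza is just one way to persist them, adjust to taste):

```shell
# Flush dirty pages to the NFS server almost immediately, instead of
# letting a large write accumulate in the local page cache.
sysctl -w vm.dirty_writeback_centisecs=1   # wake the flusher every 10ms
sysctl -w vm.dirty_background_ratio=1      # start background writeback at 1% of RAM

# To make the change survive a reboot, the equivalent lines in
# /etc/sysctl.conf would be:
#   vm.dirty_writeback_centisecs = 1
#   vm.dirty_background_ratio = 1
# (then apply with: sysctl -p)
```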
In other news, whilst saturating the ~10Mb/s pipe during the big write to the server, I'm seeing huge delays of >10 seconds when trying trivial operations such as ls'ing small directories. Is this normal, or is there some kind of tunable scheduling on the client to stop a single big transfer from wedging the machine?
thanks,

Matthew

--
Matthew Hodgson
Development Program Manager
OpenMarket | www.openmarket.com/europe
matthew.hodgson@xxxxxxxxxxxxxx
--
To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html