Some distributed file systems, such as IBM's SANFS, support direct I/O to the target storage without going through a cache. This is useful for write-only workloads, e.g. backing up huge amounts of data to an NFS share. If it is not already available, I think we should add a DIO mount option that tells the VFS not to cache any data, so that the close operation will not stall. With NFS's close-to-open cache coherence protocol, an aggressively caching client is a performance drag for many write-mostly workloads. (A rough userspace sketch of this kind of direct I/O, and of the sysctl tuning Jim mentions below, follows at the end of this message.)

On Fri, Aug 6, 2010 at 2:26 PM, Jim Rees <rees@xxxxxxxxx> wrote:
> Matthew Hodgson wrote:
>
>   Is there any way to tune the Linux NFSv3 client to prefer to write
>   data straight to an async-mounted server, rather than having large
>   writes to a file stack up in the local pagecache before being synced
>   on close()?
>
> It's been a while since I've done this, but I think you can tune this
> with the vm.dirty_writeback_centisecs and vm.dirty_background_ratio
> sysctls. The data will still go through the page cache, but you can
> reduce the amount that stacks up.
>
> There are other places where the data can get buffered, like the RPC
> layer, but it won't sit there any longer than it takes to go out on
> the wire.
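
For reference, here is a minimal sketch of what bypassing the page cache
already looks like from userspace via O_DIRECT, which the Linux NFS client
honors for file data. The path, chunk size, and alignment below are
illustrative assumptions, not recommendations:

    /*
     * Sketch only: write to an NFS mount with O_DIRECT so data goes
     * over the wire instead of piling up in the page cache, leaving
     * close() nothing to flush.  The path is hypothetical.
     */
    #define _GNU_SOURCE             /* for O_DIRECT */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define ALIGN  4096             /* O_DIRECT wants aligned buffers */
    #define CHUNK  (1024 * 1024)    /* 1 MiB writes, a multiple of ALIGN */

    int main(void)
    {
        void *buf;
        int fd;

        /* O_DIRECT buffers must be suitably aligned; 4 KiB is safe. */
        if (posix_memalign(&buf, ALIGN, CHUNK))
            return 1;
        memset(buf, 0, CHUNK);

        /* hypothetical backup target on an NFS mount */
        fd = open("/mnt/nfs/backup.img",
                  O_WRONLY | O_CREAT | O_DIRECT, 0644);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        if (write(fd, buf, CHUNK) != CHUNK)
            perror("write");

        close(fd);
        free(buf);
        return 0;
    }

A per-open flag like this is exactly what a DIO mount option would apply
to every file on the mount, without patching applications.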
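And a sketch of the tuning Jim describes, done by writing the /proc/sys
files that back those sysctls (the equivalent of "sysctl -w"; needs root).
The values are illustrative assumptions, not recommendations:

    /*
     * Sketch only: shrink the window of dirty data the client holds
     * by making background writeback kick in earlier and run more
     * often than the defaults (10% of memory, every 5 seconds).
     */
    #include <stdio.h>

    static int set_knob(const char *path, const char *val)
    {
        FILE *f = fopen(path, "w");

        if (!f) {
            perror(path);
            return -1;
        }
        fputs(val, f);
        fclose(f);
        return 0;
    }

    int main(void)
    {
        /* wake the flusher threads every 1 second instead of every 5 */
        set_knob("/proc/sys/vm/dirty_writeback_centisecs", "100");
        /* start background writeback at 1% of memory instead of 10% */
        set_knob("/proc/sys/vm/dirty_background_ratio", "1");
        return 0;
    }

As Jim notes, data still passes through the page cache with this approach;
it only bounds how much can stack up before close().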