Hi Brian,

Thank you very much for your info. I have already tried io-cache; it helps a little, but it can't compete with the page cache. BTW, how much effort would it take to backport fopen-keep-cache to 3.3?

Sent from my iPhone

On 2013-1-8, at 23:21, Brian Foster <bfoster at redhat.com> wrote:

> On 01/08/2013 09:49 AM, ??? wrote:
>> Dear gluster experts,
>>
>> I searched through the glusterfs 3.3 source tree and can't find any code
>> related to the fuse open option FOPEN_KEEP_CACHE. Does this mean that
>> glusterfs 3.3 doesn't support the fuse keep-cache feature? I did, however,
>> find keep-cache code in mainline. My questions are:
>
> It appears that keep cache support is not included in the 3.3 release.
> You can build from the latest git repo to try it:
>
> 32ffb79f fuse/md-cache: add support for the 'fopen-keep-cache' mount option
>
>> 1. Does glusterfs 3.3 support the page cache?
>
> Page cache interaction is primarily a function of fuse. The current
> behavior is that writes are synchronous to the fuse filesystem, reads
> are potentially served from the page cache, and file open operations
> invalidate the entire mapping for a file.
>
> The FOPEN_KEEP_CACHE fuse open flag disables the latter behavior. In
> turn, gluster detects whether changes have occurred in the file remotely
> and nudges fuse to invalidate the file cache on demand. There is also
> upstream fuse work in progress to support writeback cache behavior [1].
>
>> 2. If not, what is the best practice to improve performance when a file is
>> frequently read, especially through random-access fops?
>
> I would suggest experimenting with the tunables in the io-cache
> translator (xlators/performance/io-cache/src/io-cache.c), such as
> priority, cache-timeout and cache-size.
>
>> 3. Does glusterfs 3.4 support the page cache? If so, how can it be enabled
>> through a mount option?
>
> I would hope that it will at least include the keep cache support. Use
> the fopen-keep-cache mount option on latest gluster to try it out.
>
> Brian
>
> [1] - http://comments.gmane.org/gmane.comp.file-systems.fuse.devel/12266
>
>> Thank you very much.
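
P.S. For anyone curious what FOPEN_KEEP_CACHE looks like at the fuse API
level, here is a minimal, hypothetical libfuse 2.x sketch (not gluster's
actual fuse bridge code; the demo_* names and the /hello file are made up
for illustration). The only interesting line is fi->keep_cache = 1 in the
open handler: it becomes FOPEN_KEEP_CACHE in the open reply and tells the
kernel to keep the file's existing page cache across opens instead of
dropping it.

/* keep_cache_demo.c -- illustrative one-file FUSE filesystem (not gluster code).
 * Setting fi->keep_cache in the open handler asks the kernel (via
 * FOPEN_KEEP_CACHE) not to invalidate the file's page cache on open.
 * readdir is omitted for brevity; access the file by its full path.
 */
#define FUSE_USE_VERSION 26
#include <fuse.h>
#include <string.h>
#include <errno.h>
#include <sys/stat.h>

static const char *demo_path = "/hello";            /* made-up file name */
static const char *demo_data = "cached contents\n";

static int demo_getattr(const char *path, struct stat *st)
{
    memset(st, 0, sizeof(*st));
    if (strcmp(path, "/") == 0) {
        st->st_mode = S_IFDIR | 0755;
        st->st_nlink = 2;
    } else if (strcmp(path, demo_path) == 0) {
        st->st_mode = S_IFREG | 0444;
        st->st_nlink = 1;
        st->st_size = strlen(demo_data);
    } else {
        return -ENOENT;
    }
    return 0;
}

static int demo_open(const char *path, struct fuse_file_info *fi)
{
    if (strcmp(path, demo_path) != 0)
        return -ENOENT;
    /* Key line: keep previously cached pages for this file across opens. */
    fi->keep_cache = 1;
    return 0;
}

static int demo_read(const char *path, char *buf, size_t size, off_t off,
                     struct fuse_file_info *fi)
{
    size_t len = strlen(demo_data);
    (void) fi;
    if (strcmp(path, demo_path) != 0)
        return -ENOENT;
    if ((size_t) off >= len)
        return 0;
    if (off + size > len)
        size = len - off;
    memcpy(buf, demo_data + off, size);
    return size;
}

static struct fuse_operations demo_ops = {
    .getattr = demo_getattr,
    .open    = demo_open,
    .read    = demo_read,
};

int main(int argc, char *argv[])
{
    return fuse_main(argc, argv, &demo_ops, NULL);
}

Build with something like: gcc keep_cache_demo.c -o keep_cache_demo
`pkg-config fuse --cflags --libs`, then mount it on an empty directory and
read /hello. On the gluster side none of this is written by hand, of course;
per Brian's note, building from latest git and mounting with the
fopen-keep-cache option is what enables the equivalent behavior in the
gluster fuse client.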