On Sun, Oct 18, 2015 at 8:27 PM, Yan, Zheng <ukernel@xxxxxxxxx> wrote:
> On Sat, Oct 17, 2015 at 1:42 AM, Burkhard Linke
> <Burkhard.Linke@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx> wrote:
>> Hi,
>>
>> I've noticed that CephFS (both ceph-fuse and kernel client in version 4.2.3)
>> remove files from page cache as soon as they are not in use by a process
>> anymore.
>>
>> Is this intended behaviour? We use CephFS as a replacement for NFS in our
>> HPC cluster. It should serve large files which are read by multiple jobs on
>> multiple hosts, so keeping them in the page cache over the duration of
>> several job invocations is crucial.
>
> Yes. MDS needs resource to track the cached data. We don't want MDS
> use too much resource.

So if I'm reading things right, the code to drop the page cache for ceph-fuse
was added in https://github.com/ceph/ceph/pull/1594 (specifically
82015e409d09701a7048848f1d4379e51dd00892). I don't think it's actually needed
for the cap trimming or to prevent MDS cache pressure, and it's not clear to
me why it was added there in the first place. You do say the PR as a whole
fixed a lot of bugs, though. Do you know if the page cache clearing was for
any bugs in particular, Zheng?

In general, I think proactively clearing the page cache is something we only
want to do as part of our consistency and cap-handling story, and file closes
don't really play into that.

I've pushed a TOTALLY UNTESTED (NOT EVEN COMPILED) branch,
client-pagecache-norevoke, based on master to the gitbuilders. If it succeeds
in building, you can download it for testing, or cherry-pick the top commit
out of git and build your own packages. Then set the (new to this branch)
client_preserve_pagecache config option to true (default: false) and it
should avoid flushing the page cache. There might be (probably are?) bugs as
a result of that; no idea. Use at your own risk, but let us know if it makes
things better for you.
-Greg
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
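
[Editor's note: for anyone who wants to try the branch, here is a rough
sketch of what the ceph.conf change could look like. This assumes the
ceph-fuse build actually comes from client-pagecache-norevoke; the
client_preserve_pagecache option is only what that branch introduces and does
not exist in mainline Ceph.]

    [client]
        # Only present in the client-pagecache-norevoke test branch.
        # Default is false, i.e. ceph-fuse still drops cached pages when
        # the last user of a file closes it.
        client_preserve_pagecache = true

After remounting with ceph-fuse, one way to check whether file data really
stays cached across job runs is to inspect one of the large input files with
a tool such as vmtouch or fincore once the job has exited.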