On Thu, Jul 25, 2019 at 3:08 AM Janek Bevendorff
<janek.bevendorff@xxxxxxxxxxxxx> wrote:
>
> The rsync job has been copying quite happily for two hours now. The good
> news is that the cache size isn't increasing unboundedly with each
> request anymore. The bad news is that it still is increasing after all,
> though much slower. I am at 3M inodes now and it started off with 900k,
> settling at 1M initially. I had a peak just now of 3.7M, but it went
> back down to 3.2M shortly after that.
>
> According to the health status, the client has started failing to
> respond to cache pressure, so it's still not working as reliably as I
> would like it to. I am also getting this very peculiar message:
>
> MDS cache is too large (7GB/19GB); 52686 inodes in use by clients
>
> I guess the 53k inodes is the number that is actively in use right now
> (compared to the 3M for which the client generally holds caps). Is that
> so? Cache memory is still well within bounds, however. Perhaps the
> message is triggered by the recall settings and just a bit misleading?

Based on that message, it would appear you still have an inode limit in
place ("mds_cache_size"). Please unset that config option. Your
mds_cache_memory_limit is apparently ~19GB.
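A quick way to verify and clear it (a sketch only: "mds.a" stands in for
your MDS daemon's name, and the "ceph config rm" assumes the option was
set in the Mimic+ cluster config database rather than in ceph.conf):

    # Ask the running daemon what it currently has; 0 means no inode limit
    ceph daemon mds.a config get mds_cache_size
    ceph daemon mds.a config get mds_cache_memory_limit

    # Drop the inode limit from the config database. If it was set in
    # ceph.conf instead, remove the line there and restart the MDS.
    ceph config rm mds mds_cache_size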
There is another limit, mds_max_caps_per_client (default 1M), which the
client is hitting. That's why the MDS is recalling caps from the client,
not because any cache memory limit has been hit. I don't recommend
increasing it.
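If you want to confirm which session is bumping into that limit, the
admin socket shows per-client cap counts (again, "mds.a" is a
placeholder, and the jq filter assumes the usual "id"/"num_caps" fields
in the session dump):

    ceph daemon mds.a config get mds_max_caps_per_client
    ceph daemon mds.a session ls | jq '.[] | {id, num_caps}'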
--
Patrick Donnelly, Ph.D.
He / Him / His
Senior Software Engineer
Red Hat
Sunnyvale, CA

GPG: 19F28A586F808C2402351B93C3301A3E258DD79D