Hi Sean,
Aren't there any downsides to increasing the mds cache size?

On Thu, Jun 9, 2016 at 12:41 PM, Sean Crosby <richardnixonshead@xxxxxxxxx> wrote:
Hi Elias,

When we have received the same warning, our solution has been to increase the inode cache on the MDS. We have added

    mds cache size = 2000000

to the [global] section of ceph.conf on the MDS server. We have to restart the MDS for the change to be applied.

Sean
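A rough sketch of what that change looks like, plus one way to confirm the running daemon picked up the new value after the restart (assuming the default /etc/ceph/ceph.conf path; the config-show check is an extra suggestion on top of Sean's steps, and the daemon name mds.ceph-mds03 is taken from the mds dump further down):

    # /etc/ceph/ceph.conf on the MDS host
    [global]
    # maximum number of inodes the MDS keeps in cache (Jewel default: 100000)
    mds cache size = 2000000

    # after restarting the MDS, verify over the admin socket:
    ceph daemon mds.ceph-mds03 config show | grep mds_cache_size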
On 9 June 2016 at 19:55, Elias Abacioglu <elias.abacioglu@xxxxxxxxxxxxxxxxx> wrote:

Hi,

I know this has been asked here a couple of times, but I couldn't find anything concrete.
I have the following warning in our ceph cluster:

mds0: Client web01:cephfs.web01 failing to respond to cache pressure

In previous Ceph versions this might have been a bug, but now we are running Jewel. So is there a way to fix this warning? Do I need to tune some values? Boost the cluster? Boost the client?

Here are some details:
Ceph 10.2.1
Client kernel is 4.4.0.
# ceph mds dump
dumped fsmap epoch 5755
fs_name cephfs
epoch 5755
flags 0
created 2015-12-03 11:21:28.128193
modified 2016-05-16 06:48:47.969430
tableserver 0
root 0
session_timeout 60
session_autoclose 300
max_file_size 1099511627776
last_failure 4900
last_failure_osd_epoch 5884
compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table}
max_mds 1
in 0
up {0=574261}
failed
damaged
stopped
data_pools 2
metadata_pool 3
inline_data disabled
574261: 10.3.215.5:6801/62035 'ceph-mds03' mds.0.5609 up:active seq 515014
594257: 10.3.215.10:6800/1386 'ceph-mds04' mds.0.0 up:standby-replay seq 1
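For whoever digs into this later: one way to see whether web01 really is sitting on a large number of caps is to query the active MDS over its admin socket; a sketch, run on the host of the active MDS (daemon name taken from the dump above; the exact output fields may differ between releases):

    # list client sessions; the entry for web01 should include its cap count
    ceph daemon mds.ceph-mds03 session ls

    # current MDS cache counters (how many inodes are cached right now)
    ceph daemon mds.ceph-mds03 perf dump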
Thanks,
Elias
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com