In this thread [1] it is suggested to bump up:

    mds log max segments = 200
    mds log max expiring = 150

(A sketch of how these could be applied follows below the quoted message.)

[1] http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-December/023490.html

On Sun, Apr 28, 2019 at 2:58 PM Winger Cheng <wingerted@xxxxxxxxx> wrote:
>
> Hello Everyone,
>
> I have a CephFS cluster with 4 nodes; every node has 5 HDDs and 1 SSD.
> I use BlueStore and place the WAL and DB on the SSD. We also set aside
> 50 GB on each SSD for the metadata pool.
> My workload writes 10 million files into 200 directories from 200 clients.
>
> With 1 MDS I get 4k ops and everything works fine.
>
> With 2 MDSes I get 3k ops on each MDS, but the MDS log trims very
> slowly (it is always behind on trimming), and my metadata pool fills up
> very quickly, since most of its space is used by the MDS log.
> When I stop writing, though, the entire MDS log is trimmed within 5 minutes.
>
> I'm using Ceph 13.2.5 CephFS with the kernel client; every client runs
> kernel version 4.14.35.
>
> What's wrong?
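For applying the values above on a 13.2.x cluster, something like the
following should work; the option names are assumed to match your build,
which you can verify with `ceph daemon mds.<name> config show | grep mds_log`:

    # Runtime change on all active MDS daemons (not persisted across restarts):
    ceph tell mds.* injectargs '--mds_log_max_segments=200 --mds_log_max_expiring=150'

    # Persistent change, in ceph.conf on the MDS hosts:
    [mds]
    mds log max segments = 200
    mds log max expiring = 150

Untested on my end, so after changing them I would watch
`ceph daemon mds.<name> perf dump` under write load to confirm the
journal segment count actually starts dropping.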