Re: Ceph Multi-MDS Trim Log Slow

I have already set mds log max segments to 256, and on 13.2.5 mds log max expiring is no longer needed, since https://github.com/ceph/ceph/pull/18624
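
For reference, the setting can be applied like this (a sketch; the second form assumes Mimic's ceph config database, and daemon sections/values here just mirror what I described above):

    # ceph.conf on the MDS hosts
    [mds]
        mds log max segments = 256

    # or cluster-wide at runtime
    ceph config set mds mds_log_max_segments 256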

Serkan Çoban <cobanserkan@xxxxxxxxx> wrote on Sun, Apr 28, 2019 at 9:03 PM:
In this thread [1] it is suggested to bump up:
mds log max segments = 200
mds log max expiring = 150

1- http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-December/023490.html

On Sun, Apr 28, 2019 at 2:58 PM Winger Cheng <wingerted@xxxxxxxxx> wrote:
>
> Hello Everyone,
>
> I have a CephFS cluster with 4 nodes; each node has 5 HDDs and 1 SSD.
> I use BlueStore and place the WAL and DB on the SSD; we also set aside 50 GB on each SSD for a metadata pool.
> My workload writes 10 million files into 200 directories from 200 clients.
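>
> (For reference, the OSDs were created roughly like this; the device paths are placeholders, not the real ones:
>
>     ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1
>
> one such command per HDD, each with a DB partition on the shared SSD.)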
>
> When I use 1 MDS I get 4k ops and everything works fine.
>
> When I use 2 MDS daemons, I get 3k ops on both, but the MDS log trims very slowly and is always behind on trimming,
> so my metadata pool fills up very quickly, since most of its space is taken by the MDS log.
> Yet when I stop writing, the entire MDS log is trimmed within 5 minutes.
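>
> I watched the trim backlog via the MDS admin socket (a sketch; mds.a is a placeholder name, and I'm assuming the mds_log perf counters look the same on 13.2.5):
>
>     ceph daemon mds.a perf dump mds_log
>     # "seg" = current journal segments; under load it climbs far past
>     # mds log max segments, and only falls back once writes stop
>
> ceph df shows the metadata pool filling at the same time.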
>
> I'm running CephFS on Ceph 13.2.5 with the kernel client; every client is on kernel 4.14.35.
>
> What's wrong?
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
