Re: CephFS metadata pool grows by two orders of magnitude while trimming (?) snapshots

Hi Patrick,

The event log sizes of three of the five MDS daemons are also still very
high. mds.1, mds.3, and mds.4 report between 4 and 5 million events,
mds.0 around 1.4 million, and mds.2 between 0 and 200,000. The numbers
have been constant since my last MDS restart four days ago.

>> I ran your ceph-gather.sh script a couple of times, but it only dumps
>> mds.0. Should I modify it to dump mds.3 instead so you can have a look?
>
> Yes, please.
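Assuming the script simply hard-codes the daemon name as "mds.0" (I
haven't verified this), the change should be as simple as:

    # hypothetical one-liner: point ceph-gather.sh at rank 3 instead of 0
    sed -i 's/mds\.0/mds.3/g' ceph-gather.sh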

The session load on mds.3 had already resolved itself after a few days,
so I cannot reproduce it any more. Right now, mds.0 has the highest load
and a steadily growing event log, but it's not crazy (yet). Nonetheless,
I've sent you my dumps with upload ID
b95ee882-21e1-4ea1-a419-639a86acc785. The older dumps are from when
mds.3 was under load, but they are all from mds.0. I also attached a
newer batch, which I created just a few minutes ago.
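(The dumps were uploaded with ceph-post-file, which prints the upload ID
quoted above; something like the following, where the description text
and file names are just placeholders:)

    ceph-post-file -d "MDS dumps for metadata pool growth thread" dumps/*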

Janek


--

Bauhaus-Universität Weimar
Bauhausstr. 9a, R308
99423 Weimar, Germany

Phone: +49 3643 58 3577
www.webis.de
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



