Thanks for the reply, but how can I fix this without an outage? I tried adding 'mon compact on start = true', but the monitor just hung. Unfortunately this is a production cluster and can't take outages (I'm assuming the cluster will fail without a monitor). I had three monitors, but I was hit with the store.db bug and lost two of the three. I have tried running with 0.61.5, 0.61.7, and 0.67-rc2; none of them seem to shrink the DB.

Nelson Jeppesen
Disney Technology Solutions and Services
Phone 206-588-5001

-----Original Message-----
From: Mike Dawson [mailto:mike.dawson@xxxxxxxxxxxx]
Sent: Thursday, August 01, 2013 4:10 PM
To: Jeppesen, Nelson
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: Why is my mon store.db is 220GB?

220GB is way, way too big. I suspect your monitors need to go through a successful leveldb compaction.

The early releases of Cuttlefish suffered several issues with store.db growing unbounded. Most were fixed by 0.61.5, I believe.

You may have luck stopping all Ceph daemons, then starting the monitor by itself. When there were bugs, leveldb compaction tended to work better without OSD traffic hitting the monitors.

Also, there are some settings to force a compaction on startup, like 'mon compact on start = true' and 'mon compact on trim = true'. I don't think either is required anymore, though. See some history here:

http://tracker.ceph.com/issues/4895

Thanks,
Mike Dawson
Co-Founder & Director of Cloud Architecture
Cloudapt LLC
6330 East 75th Street, Suite 170
Indianapolis, IN 46250

On 8/1/2013 6:52 PM, Jeppesen, Nelson wrote:
> My Mon store.db has been at 220GB for a few months now. Why is this,
> and how can I fix it? I have one monitor in this cluster, and I suspect
> that I can't add monitors to the cluster because it is too big. Thank you.
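
For reference, the two options Mike mentions are ceph.conf settings that would go under the [mon] section. A minimal sketch of what that might look like is below; this is just an illustration of the settings named in this thread, and whether you still need them (and whether compaction succeeds without stopping OSD traffic) depends on the release and the fixes tracked in the issue above:

    [mon]
        # ask the monitor to compact its leveldb store each time it starts
        mon compact on start = true
        # also compact when the monitor trims old data; generally not needed
        # on releases where the store.db growth bugs are already fixed
        mon compact on trim = true

The 'compact on start' setting only takes effect when the monitor is restarted, which is why the suggestion above is to stop the other daemons and bring the monitor up on its own.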