Re: "store is getting too big" on monitors after Firefly to Giant upgrade

The mons have grown another 30GB each overnight (except for 003?), which is quite worrying.  I ran a small amount of testing yesterday after my post, nothing significant.

Based on the name, I wouldn’t expect compact-on-start to help this situation, since we don’t (shouldn’t?) restart the mons regularly, and there appears to be no documentation on it.  We currently have plenty of disk space on the mons, but if that changes, I’ll probably use it to try to bring these numbers back in line.
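For what it’s worth, the mons can also be compacted online, without a restart, via "ceph tell".  A sketch of what that could look like against this cluster (mon IDs taken from the monmap below; the store.db path assumes the default mon data location):

```shell
# Ask each monitor to compact its leveldb store in place, one at a time.
# Compaction briefly increases disk usage, so do not run them all at once.
for id in cluster4-monitor001 cluster4-monitor002 cluster4-monitor003 \
          cluster4-monitor004 cluster4-monitor005; do
    ceph tell mon.${id} compact
done

# Then check the resulting store size on each mon host
# (path assumes the default /var/lib/ceph layout):
du -sh /var/lib/ceph/mon/*/store.db
```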

:: ~ » ceph health detail | grep 'too big'
HEALTH_WARN mon.cluster4-monitor001 store is getting too big! 77365 MB >= 15360 MB; mon.cluster4-monitor002 store is getting too big! 87868 MB >= 15360 MB; mon.cluster4-monitor003 store is getting too big! 30359 MB >= 15360 MB; mon.cluster4-monitor004 store is getting too big! 93414 MB >= 15360 MB; mon.cluster4-monitor005 store is getting too big! 88232 MB >= 15360 MB
mon.cluster4-monitor001 store is getting too big! 77365 MB >= 15360 MB -- 72% avail
mon.cluster4-monitor002 store is getting too big! 87868 MB >= 15360 MB -- 70% avail
mon.cluster4-monitor003 store is getting too big! 30359 MB >= 15360 MB -- 85% avail
mon.cluster4-monitor004 store is getting too big! 93414 MB >= 15360 MB -- 69% avail
mon.cluster4-monitor005 store is getting too big! 88232 MB >= 15360 MB -- 71% avail
--
Kevin Sumner



On Dec 9, 2014, at 6:20 PM, Haomai Wang <haomaiwang@xxxxxxxxx> wrote:

Maybe you can enable "mon_compact_on_start = true" when restarting the
mon; it will compact the store on startup.
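For reference, that option goes in the [mon] section of ceph.conf, something like this (a sketch; it takes effect the next time each mon daemon starts):

```
[mon]
    # Compact the monitor's leveldb store each time the mon starts.
    mon compact on start = true
```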

On Wed, Dec 10, 2014 at 6:50 AM, Kevin Sumner <kevin@xxxxxxxxx> wrote:
Hi all,

We recently upgraded our cluster from Firefly to Giant.  Since then, we’ve been
driving load tests against CephFS.  However, we’re getting “store is getting
too big” warnings from the monitors and the mons have started consuming way
more disk space, 40GB-60GB now as opposed to ~10GB pre-upgrade.  Is this
expected?  Is there anything I can do to ease the store’s size?

Thanks!

:: ~ » ceph status
   cluster f1aefa73-b968-41e0-9a28-9a465db5f10b
    health HEALTH_WARN mon.cluster4-monitor001 store is getting too big!
45648 MB >= 15360 MB; mon.cluster4-monitor002 store is getting too big!
56939 MB >= 15360 MB; mon.cluster4-monitor003 store is getting too big!
28647 MB >= 15360 MB; mon.cluster4-monitor004 store is getting too big!
60655 MB >= 15360 MB; mon.cluster4-monitor005 store is getting too big!
57335 MB >= 15360 MB
    monmap e3: 5 mons at
{cluster4-monitor001=17.138.96.12:6789/0,cluster4-monitor002=17.138.96.13:6789/0,cluster4-monitor003=17.138.96.14:6789/0,cluster4-monitor004=17.138.96.15:6789/0,cluster4-monitor005=17.138.96.16:6789/0},
election epoch 34938, quorum 0,1,2,3,4
cluster4-monitor001,cluster4-monitor002,cluster4-monitor003,cluster4-monitor004,cluster4-monitor005
    mdsmap e6538: 1/1/1 up {0=cluster4-monitor001=up:active}
    osdmap e49500: 501 osds: 470 up, 469 in
     pgmap v1369307: 98304 pgs, 3 pools, 4933 GB data, 1976 kobjects
           16275 GB used, 72337 GB / 93366 GB avail
              98304 active+clean
 client io 3463 MB/s rd, 18710 kB/s wr, 7456 op/s
--
Kevin Sumner
kevin@xxxxxxxxx




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




--
Best Regards,

Wheat

