Depends on cluster size and how long you keep your cluster in a degraded state. Having ~64 GB available is a good idea.

Paul

--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90


On Thu, May 9, 2019 at 12:25 PM Janne Johansson <icepic.dz@xxxxxxxxx> wrote:
>
> On Thu, 9 May 2019 at 11:52, Poncea, Ovidiu <Ovidiu.Poncea@xxxxxxxxxxxxx> wrote:
>>
>> Hi folks,
>>
>> What is the recommended size for the ceph-mon data partitions? Is there a maximum limit to it? If not, is there a way to limit its growth (or clean it up)? To my knowledge ceph-mon doesn't use a lot of data (500 MB - 1 GB should be enough, but I'm not the expert here :)
>
>
> Our long-lived cluster's mons have some 1.5 G under /var/lib/ceph for the monitors; we have given them 50-ish G on /var.
> I think if you have missing/downed OSDs for a long while, the mons will retain info for replays, which will make the store grow a lot for as long as that condition lasts, so you want some margin there.
>
>>
>> We are working on the StarlingX project and need to decide whether users will ever need to resize this partition. If so, we have to implement a good partition-resize mechanism; otherwise we can leave it static and be done with it.
>
>
> Just put it on an LVM volume so you can live-expand it if needed.
>
> --
> May the most significant bit of your life be positive.
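
For anyone following the LVM suggestion above, a minimal sketch of what the live expansion could look like, assuming /var sits on a logical volume named "var" in a volume group named "vg0" with free extents available (those names and sizes are illustrative, not from the thread):

    # see how much the mon store is actually using
    du -sh /var/lib/ceph/mon/*

    # add 20 GiB to the LV and grow the filesystem along with it
    # (--resizefs invokes the matching filesystem resize, e.g. for ext4 or XFS)
    lvextend --size +20G --resizefs /dev/vg0/var

The expansion can be done while the monitor keeps running, which is the point of putting /var (or /var/lib/ceph) on LVM from the start instead of committing to a fixed partition size.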