Re: Massive Mon DB Size with noout on 14.2.11

Hmm, in that case the osdmaps do not explain your high mon disk usage.
You'll have to investigate further...
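
A rough first check (assuming the default mon data path on a 14.2.x
install) would be to see what is actually taking the space in the store:

# du -sh /var/lib/ceph/mon/*/store.db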

-- dan


On Fri, Oct 2, 2020 at 5:26 PM Andreas John <aj@xxxxxxxxxxx> wrote:
>
> Hello *,
>
> Thanks for taking care of this. I read that as "works as designed, be
> sure to have disk space available for the mon". It still sounds a little
> odd that the growth from 50 MB to ~15 GB (plus compaction space) happens
> within a couple of seconds when two OSDs rejoin the cluster. Does it
> matter that I have CephFS in use? Usually I would expect that to put load
> on the MDS, but does a large number of files also cause load on the mon?
>
> My OSD map seems to have low absolute numbers:
>
> ceph report | grep osdmap | grep committed
> report 777999536
>     "osdmap_first_committed": 1276,
>     "osdmap_last_committed": 1781,
>
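> (The gap here is 1781 - 1276 = 505 epochs, so osdmap trimming itself
> looks fine.)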
>
> If I get new disks (partitions) for the mons, is there a size
> recommendation or a rule of thumb? BTW: do I still need a filesystem on
> the partition for the mon DB?
>
> Best regards,
>
> derjohn
>
>
> On 02.10.20 16:25, Dan van der Ster wrote:
> > The important metric is the difference between these two values:
> >
> > # ceph report | grep osdmap | grep committed
> > report 3324953770
> >     "osdmap_first_committed": 3441952,
> >     "osdmap_last_committed": 3442452,
> >
> > The mon stores osdmaps on disk and trims the older versions whenever
> > the PGs are clean. Trimming brings osdmap_first_committed closer to
> > osdmap_last_committed.
> > In a cluster with no PGs backfilling or recovering, the mon should
> > trim that difference down to within 500-750 epochs.
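> >
> > For example, a rough way to check the gap directly (assuming jq is
> > installed and that those two fields sit at the top level of the report
> > JSON, as the indentation above suggests):
> >
> > # ceph report 2>/dev/null | jq '.osdmap_last_committed - .osdmap_first_committed'
> >
> > For the numbers above that would print 500, i.e. within the normal range.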
> >
> > If there are any PGs backfilling or recovering, then the mon will not
> > trim beyond the osdmap epoch at which the pools were last clean.
> >
> > So if you are accumulating gigabytes of data in the mon dir, that
> > suggests you have unclean PGs/pools.
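> >
> > A quick way to check for those is e.g.:
> >
> > # ceph pg stat
> > # ceph health detail
> >
> > Any PGs that are not active+clean will hold back the trimming as
> > described above.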
> >
> > Cheers, dan
> >
> >
> >
> >
> > On Fri, Oct 2, 2020 at 4:14 PM Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx> wrote:
> >>
> >> Does this also apply if your cluster is not healthy because of errors
> >> like '2 pool(s) have no replicas configured'? I sometimes use these
> >> pools for testing; they are empty.
> >>
> >>
> >>
> >>
> >> -----Original Message-----
> >> Cc: ceph-users
> >> Subject:  Re: Massive Mon DB Size with noout on 14.2.11
> >>
> >> As long as the cluster is not healthy, the OSD will require much more
> >> space, depending on the cluster size and other factors. Yes, this is
> >> somewhat normal.
> >>
> >
> --
> Andreas John
> net-lab GmbH  |  Frankfurter Str. 99  |  63067 Offenbach
> Geschaeftsfuehrer: Andreas John | AG Offenbach, HRB40832
> Tel: +49 69 8570033-1 | Fax: -2 | http://www.net-lab.net
>
> Facebook: https://www.facebook.com/netlabdotnet
> Twitter: https://twitter.com/netlabdotnet
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


