Re: Massive Mon DB Size with noout on 14.2.11

The important metric is the difference between these two values:

# ceph report | grep osdmap | grep committed
report 3324953770
    "osdmap_first_committed": 3441952,
    "osdmap_last_committed": 3442452,

The mon stores osdmaps on disk and trims the older versions whenever
the PGs are clean. Trimming brings osdmap_first_committed closer to
osdmap_last_committed.
In a cluster with no PGs backfilling or recovering, the mon should
trim that difference down to within 500-750 epochs.
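
For a quick check you can compute that difference directly. A minimal
sketch, assuming jq is installed and that the two fields sit at the top
level of the JSON report (ceph report writes its "report <checksum>"
line to stderr, which is why it slipped past the grep above, so it is
redirected away here). For the numbers above this prints 500, i.e.
trimming is keeping up:

# ceph report 2>/dev/null | jq '.osdmap_last_committed - .osdmap_first_committed'
500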

If any PGs are backfilling or recovering, the mon will not trim
beyond the osdmap epoch at which the pools were last clean.

So if you are accumulating gigabytes of data in the mon dir, it
suggests that you have unclean PGs/Pools.
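
To confirm that, look for PGs that are not active+clean. A rough
sketch (exact output varies by release): ceph health detail should
list the offending PGs in its warnings, and ceph pg dump_stuck unclean
shows the stuck PG IDs and their states directly.

# ceph health detail
# ceph pg dump_stuck unclean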

Cheers, dan




On Fri, Oct 2, 2020 at 4:14 PM Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx> wrote:
>
>
> Does this also count if your cluster is not healthy because of errors
> like '2 pool(s) have no replicas configured'?
> I sometimes use these pools for testing; they are empty.
>
>
>
>
> -----Original Message-----
> Cc: ceph-users
> Subject:  Re: Massive Mon DB Size with noout on 14.2.11
>
> As long as the cluster is not healthy, the OSD will require much more
> space, depending on the cluster size and other factors. Yes, this is
> somewhat normal.
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


