Re: Slow initial boot of OSDs in large cluster with unclean state

On 24/1/25 06:45, Stillwell, Bryan wrote:
> ceph report 2>/dev/null | jq '(.osdmap_last_committed -
> .osdmap_first_committed)'
> 
> This number should be between 500-1000 on a healthy cluster.  I've seen
> this as high as 4.8 million before (roughly 50% of the data stored on
> the cluster ended up being osdmaps!)

Yes, ours is healthy and has been for a while... but we didn't start
monitoring this metric until a few months ago, so it may well relate to
those slower startups.
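For anyone wanting to alert on this, the check above can be wrapped in a
small script. A minimal Python sketch, assuming the JSON comes from
`ceph report` (the sample values below are made up for illustration, and
the 1000-epoch threshold is just the upper end of the healthy range
mentioned above):

```python
import json

# Illustrative stand-in for `ceph report 2>/dev/null` output; on a real
# cluster you would capture the command's stdout instead.
report = json.loads(
    '{"osdmap_first_committed": 100, "osdmap_last_committed": 850}'
)

# Same calculation as the jq one-liner: epochs the mons are retaining.
spread = report["osdmap_last_committed"] - report["osdmap_first_committed"]
print(spread)

# Roughly 500-1000 is expected on a healthy cluster; far above that
# suggests osdmap trimming has stalled (hypothetical threshold).
if spread > 1000:
    print("WARNING: osdmaps may not be getting trimmed")
else:
    print("OK")
```

The same threshold check could of course be done in shell around the jq
command; Python just makes it easier to fold into existing monitoring.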

> This appears to be a bug that should be fixed in the latest releases of
> Ceph (Quincy 17.2.8 & Reef 18.2.4) based on this report:
> 
> https://tracker.ceph.com/issues/63883

Thanks, good to know! We'll get to 17.2.8 in the next couple of weeks,
then 18.x later this year.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



