Re: ceph-mon store.db disk usage increase on OSD-Host fail

On 12.03.2020, Wido den Hollander wrote:
> 
> 
> On 3/12/20 7:44 AM, Hartwig Hauschild wrote:
> > On 10.03.2020, Wido den Hollander wrote:
> >>
> >>
> >> On 3/10/20 10:48 AM, Hartwig Hauschild wrote:
> >>> Hi, 
> >>>
> >>> I've done a bit more testing ...
> >>>
> >>>> On 05.03.2020, Hartwig Hauschild wrote:
> >>>> Hi, 
> >>>>
> > [ snipped ]
> >>> I've read somewhere in the docs that I should provide ample space (tens of
> >>> GB) for the store.db, and found on the ML and bug tracker that ~100 GB might
> >>> not be a bad idea and that large clusters may require space an order of
> >>> magnitude greater.
> >>> Is there some sort of formula I can use to approximate the space required?
> >>
> >> I don't know of a formula, but make sure you have enough space. MONs
> >> are dedicated nodes in most production environments, so I usually
> >> install a 400-1000 GB SSD just to make sure they don't run out of space.
> >>
> > That seems fair.
> >>>
> >>> Also: is the db supposed to grow this fast in Nautilus when it did not do
> >>> that in Luminous? Is that behaviour configurable somewhere?
> >>>
> >>
> >> The MONs need to cache the OSDMaps when not all PGs are active+clean,
> >> and thus their database grows.
> >>
> >> You can compact RocksDB in the meantime, but it won't last forever.
> >>
> >> Just make sure the MONs have enough space.
> >>
> > Do you happen to know if that behaved differently in previous releases? I'm
> > just asking because I have not found anything about this yet and may need to
> > explain that it's different now.
> > 
> 
> It actually got better in recent releases; Nautilus didn't become worse.
> 
> Hammer and Jewel were very bad in this regard; their stores grew to
> hundreds of GB on large(r) clusters.
> 
> So no, I'm not aware of any changes.
> 
Fair enough.
Disabling the insights module, as XuYun pointed out, brought the
Nautilus cluster back to the same behaviour Luminous is showing here, so
I'll check whether we really need the module and how to work around the
disk usage.
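
For reference, the rough plan I'm testing (the mon ID and path are just
placeholders for our package-based install, and compaction is only a
stop-gap while PGs are recovering, as Wido notes):

    # turn off the insights mgr module, since we probably don't need it
    ceph mgr module disable insights

    # one-off compaction of a monitor's RocksDB to reclaim space
    ceph tell mon.<id> compact

    # optionally compact whenever the mon daemon starts
    ceph config set mon mon_compact_on_start true

    # keep an eye on the store size afterwards
    du -sh /var/lib/ceph/mon/ceph-<id>/store.db

Even with that, the real fix is still giving the MONs enough free disk,
since the store will grow again as long as PGs aren't active+clean.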

-- 
Cheers,
	Hardy
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


