Re: Need urgent help for ceph health error issue

On 12/9/21 13:01, Md. Hejbul Tawhid MUNNA wrote:
Hi,

This is the ceph.conf we used during cluster deployment. The ceph version is Mimic.

osd pool default size = 3
osd pool default min size = 1
osd pool default pg num = 1024
osd pool default pgp num = 1024
osd crush chooseleaf type = 1
mon_max_pg_per_osd = 2048
mon_allow_pool_delete = true
mon_pg_warn_min_per_osd = 0
mon_pg_warn_max_per_osd = 0
osd_max_pg_per_osd_hard_ratio = 8


> osd pool default size = 3
> osd pool default min size = 2

> osd pool default pg num = 32
> osd pool default pgp num = 32

^^ It depends a lot on how many pools you plan to create, and on how many OSDs you have. Do not set it too high: that leads to high memory usage on the OSDs and will get you into trouble when Ceph has to do recovery.
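
As a rough rule of thumb (all numbers below are illustrative, not taken from your cluster): aim for on the order of 100 PGs per OSD across all pools, so roughly:

  total PGs ~= (number of OSDs x 100) / replica size
  per pool  ~= total PGs / number of pools, rounded to a power of two

  e.g. 12 OSDs, size=3, 4 pools:
  (12 x 100) / 3 = 400 total -> 400 / 4 = 100 -> round up to 128 per pool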

> mon_allow_pool_delete = false

^^ Keep this at false so pools cannot be deleted by accident.
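
If you really do need to delete a pool at some point, the usual pattern is to enable the flag only for that moment and switch it off again right after (the pool name below is a placeholder; note it has to be given twice):

  ceph tell mon.* injectargs '--mon_allow_pool_delete=true'
  ceph osd pool delete <pool> <pool> --yes-i-really-really-mean-it
  ceph tell mon.* injectargs '--mon_allow_pool_delete=false'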

> mon_max_pg_per_osd = 200


Basically: leave the defaults alone, and don't set anything to crazy high or crazy low values.

This seems almost like a deliberate attempt at making the Ceph cluster a time bomb.

Your cluster is full. Don't make any of those changes now. First make sure you have enough capacity in your cluster and/or add OSDs. Then fix the pools: size=3 and min_size=2.
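
A minimal sketch of those steps, assuming a replicated pool (the pool name is a placeholder; check the usage output before touching anything):

  ceph df        # overall and per-pool usage
  ceph osd df    # per-OSD fill level, look at %USE and PGS
  ceph osd pool set <pool> size 3
  ceph osd pool set <pool> min_size 2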

Gr. Stefan


