Re: Need urgent help for ceph health error issue

On 12/9/21 09:30, Md. Hejbul Tawhid MUNNA wrote:
Hi,

Yes, min_size=1 and size=2 for ssd

for hdd it is min_size=1 and size=3

Could you please advise about using hdd and ssd in the same ceph cluster? Is it okay for production-grade openstack?

Sure. But you want min_size=2 and size=3 for both pools, really. Search for discussions about that on the ceph user mailing list (and why you would want that for production).
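For example, assuming the pools are called "ssd-pool" and "hdd-pool" (substitute your real pool names), something along these lines:

    ceph osd pool set ssd-pool size 3
    ceph osd pool set ssd-pool min_size 2
    ceph osd pool set hdd-pool min_size 2
    ceph osd pool ls detail   # verify size/min_size afterwards

Keep in mind that raising size from 2 to 3 on the ssd pool means an extra copy of all its data, so check that the SSDs have the capacity (and headroom) for that first.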

We have created a new replicated rule for ssd, different pool for ssd and new disk marking ssd class.

no idea about ceph-balancer

That probably needs fixing ;-). I would start by reading this: https://docs.ceph.com/en/latest/rados/operations/balancer/

Check if you have it enabled. If not, test what "ceph balancer eval" would advise. Enable it and see how it would improve things. Ideally you would want to use mode upmap (moving PGs instead of reweighting OSDs). In this state the balancer might not want to make any adjustments and you might need to change the osd_full_ratio and osd_backfillfull_ratio. But the tools I listed in my previous email might even work without that. You don't want to push your cluster over the edge (you are already close to the edge).
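Roughly something like this (adjust to your release; the exact ratio values below are only an example):

    ceph balancer status
    ceph osd set-require-min-compat-client luminous   # upmap needs luminous+ clients
    ceph balancer mode upmap
    ceph balancer eval        # score before enabling
    ceph balancer on

    # only if the balancer/backfill refuses to move anything, and revert afterwards:
    ceph osd set-backfillfull-ratio 0.92
    ceph osd set-full-ratio 0.96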

The number of PGs per OSD seems *really* high. The Ceph Storage Cluster has a default maximum of 300 placement groups per OSD. Have you tuned mon_max_pg_per_osd?
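Quick way to check, something like:

    ceph osd df tree                              # the PGS column shows PGs per OSD
    ceph config dump | grep mon_max_pg_per_osd    # only prints something if it was overridden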

Gr. Stefan
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


