Re: Is it normal Ceph reports "Degraded data redundancy" in normal use?


 



On 21.09.2021 09:11, Kobi Ginon wrote:
for sure the balancer affects the status

Of course, but setting several PGs to degraded is something else.


I doubt that your customers will be writing so many objects at the same
rate as the test.

I only need two hosts running rados bench to get several PGs into a degraded state.
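For reference, the load looks roughly like this (pool name, thread count and duration here are illustrative, not the exact values from my test):

```shell
# Generate a sustained write load against a test pool from each host.
# "bench" is a placeholder pool name; tune -t (threads) and the
# duration (seconds) to match your own test.
rados bench -p bench 60 write -t 16 --no-cleanup

# Afterwards, check whether any PGs went degraded during the run:
ceph health detail
ceph pg dump pgs_brief | grep -v 'active+clean'
```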


Maybe you need to play with the balancer configuration a bit.

Maybe, but a balancer should not set the cluster health to warning with several PGs in a degraded state. It should be possible to do this cleanly: copy the data, and delete the source only when the copy is OK.


You could start with this:
The balancer mode can be changed to crush-compat mode, which is backward
compatible with older clients, and will make small changes to the data
distribution over time to ensure that OSDs are equally utilized.
https://docs.ceph.com/en/latest/rados/operations/balancer/
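For anyone who wants to try the mode change described in those docs, the commands look roughly like this (a sketch; check `ceph balancer status` on your own version first):

```shell
# Inspect the current balancer state and mode.
ceph balancer status

# Switch to crush-compat mode, which is backward compatible
# with older clients and rebalances gradually.
ceph balancer mode crush-compat
ceph balancer on

# Or disable the balancer entirely instead.
ceph balancer off
```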

I will probably just turn it off before I put the cluster into production.


Side note: I am indeed using an old version of Ceph (Nautilus) with the balancer
configured, and I run rados benchmarks, but I did not see such a problem.
On the other hand, I am not using pg_autoscaler;
I set the pools' PG numbers in advance according to assumptions about the
percentage each pool will use.
It could be that you do use this mode, and the combination of autoscaler and
balancer is what reveals this issue.

If you look at my initial post you will see that the pool is created with --autoscale-mode=off. The cluster is running 16.2.5 and is empty except for one pool with one PG created by Cephadm.
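For completeness, creating a pool with autoscaling disabled looks roughly like this (pool name and PG count are illustrative, not the values from my initial post):

```shell
# Create a replicated pool with a fixed PG count and autoscaling off.
ceph osd pool create bench 128 128 replicated --autoscale-mode=off

# Verify the autoscale mode per pool:
ceph osd pool autoscale-status
```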


--
Kai Stian Olstad
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


