Re: Ceph cluster out of balance after adding OSDs

On 27.03.23 23:13, Pat Vaughan wrote:
Looking at the pools, there are two CRUSH rules. Only one pool has a meaningful amount of data, the charlotte.rgw.buckets.data pool. This is the crush rule for that pool.

So that pool explicitly uses the device class ssd, while the other pools do not care about the device class.
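If it helps, a quick way to verify this (just a sketch; <rule-name> is a placeholder for one of the rule names from your cluster):

    # list pools together with the CRUSH rule each one uses
    ceph osd pool ls detail
    # list all CRUSH rules
    ceph osd crush rule ls
    # dump a rule to see what it takes as its root
    ceph osd crush rule dump <rule-name>

A rule bound to a device class shows an item_name like "default~ssd" in its "take" step, while a class-agnostic rule simply takes "default".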

The autoscaler is not able to cope with this situation: with one rule restricted to the ssd device class and the others using the whole default root, the pools map to overlapping CRUSH roots, and the autoscaler cannot compute capacity for them.

charlotte.rgw.buckets.data is an erasure-coded pool, correct? And its rule was created automatically when you created the erasure code profile.

You should create an erasure coding rule that does not care about the device class and assign it to the pool charlotte.rgw.buckets.data.
After that the autoscaler will be able to work again.
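Roughly like this (a sketch only; the profile and rule name rgw-data-noclass is made up, and k, m and the failure domain have to match your existing profile):

    # new EC profile without crush-device-class
    ceph osd erasure-code-profile set rgw-data-noclass k=4 m=2 crush-failure-domain=host
    # create a CRUSH rule from that profile
    ceph osd crush rule create-erasure rgw-data-noclass rgw-data-noclass
    # point the pool at the new rule
    ceph osd pool set charlotte.rgw.buckets.data crush_rule rgw-data-noclass

As far as I know the EC profile of an existing pool cannot be changed, but switching the crush_rule is enough here.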

Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin

https://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Amtsgericht Berlin-Charlottenburg - HRB 220009 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



