Re: Ceph cluster out of balance after adding OSDs



On 27.03.23 16:04, Pat Vaughan wrote:

we looked at the number of PGs for that pool and found that there was only
1 for the rgw.log pool, and "osd pool autoscale-status"
doesn't return anything, so it looks like the autoscaler hasn't been working.

If you are in this situation, have a look at the CRUSH rules of your pools. If the cluster has multiple device classes (hdd, ssd), then each pool needs to use a rule that targets exactly one device class.

The autoscaler currently does not work when one pool is restricted to a single device class while another pool uses the default CRUSH rule and therefore spans multiple device classes.
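A minimal sketch of how to check and fix this on the command line, assuming a replicated pool and an ssd device class (the rule name "replicated_ssd" and pool name "mypool" are placeholders, not from the thread):

```shell
# Inspect which CRUSH rule each pool uses and which rules exist.
ceph osd pool ls detail          # shows the crush_rule id per pool
ceph osd crush rule ls
ceph osd crush rule dump         # a device-class rule references e.g. "default~ssd"

# Create a replicated rule restricted to one device class:
#   create-replicated <rule-name> <root> <failure-domain> <device-class>
ceph osd crush rule create-replicated replicated_ssd default host ssd

# Point the pool at the device-class rule, then re-check the autoscaler.
ceph osd pool set mypool crush_rule replicated_ssd
ceph osd pool autoscale-status
```

Moving a pool to a different CRUSH rule triggers data movement, so expect rebalancing traffic after the last step.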

Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Amtsgericht Berlin-Charlottenburg - HRB 220009 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
