Yes, this is an EC pool, and it was created automatically via the dashboard.
Will this help to correct my current situation? Currently, 3 of the 12 OSDs
are about 90% full. One of them just crashed and will not come back up,
logging:

"bluefs _allocate unable to allocate 0x80000 on bdev 1, allocator name
block, allocator type hybrid, capacity 0x31ffc00000, block size 0x1000,
alloc size 0x10000, free 0x55d212000, fragmentation 0.795245, allocated 0x0"

On Tue, Mar 28, 2023 at 3:46 AM Robert Sander
<r.sander@xxxxxxxxxxxxxxxxxxx> wrote:

> On 27.03.23 23:13, Pat Vaughan wrote:
> > Looking at the pools, there are 2 crush rules. Only one pool has a
> > meaningful amount of data, the charlotte.rgw.buckets.data pool. This is
> > the crush rule for that pool.
>
> So that pool uses the device class ssd explicitly where the other pools
> do not care about the device class.
>
> The autoscaler is not able to cope with this situation.
>
> charlotte.rgw.buckets.data is an erasure coded pool, correct? And the
> rule was created automatically when you created the erasure coding
> profile.
>
> You should create an erasure coding rule that does not care about the
> device class and assign it to the pool charlotte.rgw.buckets.data.
> After that the autoscaler will be able to work again.
>
> Regards
> --
> Robert Sander
> Heinlein Consulting GmbH
> Schwedter Str. 8/9b, 10119 Berlin
>
> https://www.heinlein-support.de
>
> Tel: 030 / 405051-43
> Fax: 030 / 405051-19
>
> Amtsgericht Berlin-Charlottenburg - HRB 220009 B
> Geschäftsführer: Peer Heinlein - Sitz: Berlin
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
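
Just to make sure I understand the fix: would it look roughly like the
sketch below? The profile and rule names are placeholders of my own, and
the k/m and failure-domain values are assumptions that would need to match
the pool's existing erasure coding profile.

  # Create a new EC profile that does NOT set crush-device-class.
  # k, m and crush-failure-domain are placeholders; they must match the
  # existing profile, since k+m cannot be changed for an existing pool.
  ceph osd erasure-code-profile set ec-anyclass-profile \
      k=4 m=2 crush-failure-domain=host

  # Create a crush rule from that profile (no device class restriction).
  ceph osd crush rule create-erasure ec-anyclass-rule ec-anyclass-profile

  # Point the data pool at the new rule and verify.
  ceph osd pool set charlotte.rgw.buckets.data crush_rule ec-anyclass-rule
  ceph osd pool get charlotte.rgw.buckets.data crush_rule

I assume switching the rule will trigger some data movement while PGs are
remapped, which I'd want to keep in mind given how full a few of the OSDs
already are.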