Re: Adding device class to CRUSH rule without data movement

Aha, that's what I was looking for! And indeed, it seems to do exactly
what I had in mind: just moving the bucket IDs around. I had mistakenly
thought this functionality was only for people who already had multiple
parallel hierarchies in their crush tree, but it works for a single
default hierarchy too.

This did the trick:

crushtool -i crush.old --reclassify --reclassify-root default hdd -o crush
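
For anyone else doing this, the rough end-to-end sequence would look
something like the following (file names are arbitrary, and it's worth
checking the --compare output before injecting anything into a live
cluster):

  ceph osd getcrushmap -o crush.old
  crushtool -d crush.old -o crush.old.txt   # optional: inspect the decompiled map
  crushtool -i crush.old --reclassify --reclassify-root default hdd -o crush
  crushtool -i crush.old --compare crush    # report how many PG mappings change
  ceph osd setcrushmap -i crush

The --compare step simulates mappings with both maps and reports the
mismatches per rule, which is how you can confirm that the reclassify
really avoids a big data shuffle before applying it.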

Thanks!

On 2025/03/14 23:17, Eugen Block wrote:
> The crushtool would do that with the --reclassify flag. There was a
> thread about it here on this list a couple of months ago; I'm on my
> mobile, so I don't have a link for you right now, but the docs should
> also contain some examples, if I'm not mistaken.
> 
> 
> Quoting Hector Martin <marcan@xxxxxxxxx>:
> 
>> Hi,
>>
>> I have an old Mimic cluster that I'm doing some cleanup work on and
>> adding SSDs to, before upgrading to a newer version.
>>
>> As part of adding the SSDs, I first need to switch the existing CRUSH
>> rules to use only the HDD device class. Is there some way of doing this
>> that doesn't result in 100% data movement?
>>
>> Simply replacing `step take default` with `step take default class hdd`
>> in every CRUSH rule seems to completely shuffle the cluster data. I
>> tried manually assigning the bucket IDs of the hdd hierarchy so that
>> they are in the same order as the bucket IDs of the primary hierarchy,
>> hoping that having them sort the same would result in the same data
>> distribution, but that didn't work either.
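>>
>> For illustration, the change I mean is taking a rule like this (rule
>> name and numbers are just an example, not my actual map):
>>
>>     rule replicated_rule {
>>         id 0
>>         type replicated
>>         min_size 1
>>         max_size 10
>>         step take default
>>         step chooseleaf firstn 0 type host
>>         step emit
>>     }
>>
>> and only changing the take step to `step take default class hdd`, i.e.
>> pointing it at the hdd shadow tree (default~hdd) instead of the plain
>> default root.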
>>
>> Is there some magic incantation to swap around the CRUSH rules/tree so
>> that it results in exactly the same data distribution after adding the
>> hdd class constraint? The set of potential OSDs should be identical
>> (there are no SSDs yet), so the data movement seems to be some
>> technicality of the CRUSH implementation... perhaps completely switching
>> around the main id and hdd-class id of all the buckets would do it? (I'm
>> a little afraid to mess with the main ids in a production cluster...).
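>>
>> To make that concrete: in the decompiled map every bucket carries a
>> primary id plus one shadow id per device class, roughly like this (host
>> name and numbers are hypothetical):
>>
>>     host ceph01 {
>>         id -3            # do not change unnecessarily
>>         id -7 class hdd  # do not change unnecessarily
>>         alg straw2
>>         hash 0
>>         item osd.0 weight 3.638
>>     }
>>
>> so the idea would be swapping -3 and -7 for every bucket in the map.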
>>
>> This cluster is already having I/O load issues (that's part of why I'm
>> adding SSDs), so I'd really like to avoid a total data shuffle if possible.
>>
>> Thanks,
>> - Hector

- Hector
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



