Re: Change crush rule on pool

Can I do that when the SSDs are already used in another crush rule (the backing and kvm_ssd RBD pools)?
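
Concretely, the change I have in mind is something along these lines (the rgw-ssd rule name, the "default" root and the host failure domain are only examples; adjust to the local crush map):

    # example only: a replicated rule that targets the ssd device class
    ceph osd crush rule create-replicated rgw-ssd default host ssd
    # point the existing pool at the new rule; data then remaps onto the SSDs
    ceph osd pool set default.rgw.buckets.data crush_rule rgw-ssd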

Jesper



Sent from myMail for iOS


Saturday, 12 September 2020, 11.01 +0200 from anthony.datri@xxxxxxxxx  <anthony.datri@xxxxxxxxx>:
>If you have capacity to have both online at the same time, why not add the SSDs to the existing pool, let the cluster converge, then remove the HDDs?  Either all at once or incrementally?  With care you’d have zero service impact.  If you want to change the replication strategy at the same time, that would be more complex.
>
>— Anthony
>
>> On Sep 12, 2020, at 12:42 AM,  jesper@xxxxxxxx wrote:
>> 
>>> I would like to change the crush rule so data lands on SSD instead of HDD.
>>> Can this be done on the fly, with the data migration happening by itself, or
>>> do I need to do something to move the data?
>> 
>> I would actually like to relocate my object store to a new storage tier.
>> Is the best approach to:
>> 
>> 1) Create a new pool on the SSD storage tier.
>> 2) Stop client activity.
>> 3) rados cppool the data into the new pool.
>> 4) Rename the new pool back to "default.rgw.buckets.data".
>> 
>> Done?
>> 
>> Thanks.
>
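
For reference, a rough sketch of the drain approach Anthony describes, once the SSD OSDs have been added under the same crush root the pool already uses (the osd id below is a placeholder):

    # gradually drain one HDD OSD by lowering its crush weight
    ceph osd crush reweight osd.12 0.0
    # or mark it out in one step and let recovery move its data
    ceph osd out 12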
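
A rough sketch of the copy-based migration in the quoted steps 1-4 (the pg counts and the rgw-ssd rule name are placeholders; clients have to stay stopped until the rename is done):

    # 1) new pool on the SSD-backed rule
    ceph osd pool create default.rgw.buckets.data.new 128 128 replicated rgw-ssd
    # 3) copy the objects (check current docs for rados cppool caveats first)
    rados cppool default.rgw.buckets.data default.rgw.buckets.data.new
    # 4) swap the names so RGW keeps using "default.rgw.buckets.data"
    ceph osd pool rename default.rgw.buckets.data default.rgw.buckets.data.old
    ceph osd pool rename default.rgw.buckets.data.new default.rgw.buckets.data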
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



