Re: Crush Rules with multiple Device Classes

On 19.07.2018 at 08:43, Linh Vu wrote:
> Since the new NVMes are meant to replace the existing SSDs, why don't you assign class "ssd" to the new NVMe OSDs? That way you don't need to change the existing OSDs or the existing crush rule, and the new NVMe OSDs won't lose any performance; "ssd" or "nvme" is just a name.
> 
> When you deploy the new NVMes, you can chuck this under [osd] in their local ceph.conf: `osd_class_update_on_start = false`. They should then come up with a blank class, and you can set the class to "ssd" afterwards.
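> 
> Roughly, that would be something like the following (untested sketch; osd.<id> stands for one of the new NVMe OSDs):
> 
>   # in ceph.conf on the new NVMe hosts, before the OSDs are started:
>   [osd]
>   osd_class_update_on_start = false
> 
>   # once the OSD is up with a blank class, assign it manually:
>   $ ceph osd crush set-device-class ssd osd.<id>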

Right, this should also work. But then I'd prefer to "relabel" the existing SSDs and the crush rule to read "nvme", so that future NVMes will classify themselves automatically
without manual configuration. We are trying to keep our ceph.conf small to follow the spirit of Mimic and future releases ;-).
I'll schedule this change for our next I/O pause, just to be on the safe side.
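
For reference, I expect the rule part of the relabel to boil down to something like this (untested sketch; it assumes our metadata pool is also named cephfs_metadata, and "osd" matches the failure domain of the current rule):

  $ ceph osd crush rule create-replicated cephfs_metadata_nvme default osd nvme
  $ ceph osd pool set cephfs_metadata crush_rule cephfs_metadata_nvme

Alternatively, one could decompile the crushmap, change "class ssd" to "class nvme" in the existing rule and inject it back, which keeps the rule id. The per-OSD re-classing is sketched below in my original mail.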

Thanks and all the best,
	Oliver

> 
> ------------------------------------------------------------------------
> *From:* ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> on behalf of Oliver Freyermuth <freyermuth@xxxxxxxxxxxxxxxxxx>
> *Sent:* Thursday, 19 July 2018 6:13:25 AM
> *To:* ceph-users@xxxxxxxxxxxxxx
> *Cc:* Peter Wienemann
> *Subject:*  Crush Rules with multiple Device Classes
>  
> Dear Cephalopodians,
> 
> We use an SSD-only pool to store the metadata of our CephFS.
> In the future, we will add a few NVMes and, in the long term, replace the existing SSDs with NVMes as well.
> 
> Thinking this through, I came up with three questions which I have not found answered in the docs (yet).
> 
> Currently, we use the following crush-rule:
> --------------------------------------------
> rule cephfs_metadata {
>         id 1
>         type replicated
>         min_size 1
>         max_size 10
>         step take default class ssd
>         step choose firstn 0 type osd
>         step emit
> }
> --------------------------------------------
> As you can see, this uses "class ssd".
> 
> Now my first question is:
> 1) Is there a way to specify "take default class (ssd or nvme)"?
>    Then we could just do this for the migration period, and at some point remove "ssd".
> 
> If multi-device-class in a crush rule is not supported yet, the only workaround which comes to my mind right now is to issue:
>   $ ceph osd crush set-device-class nvme <old_ssd_osd>
> for all our old SSD-backed OSDs, and modify the crush rule to refer to class "nvme" straightaway.
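> 
> In full, per OSD, that would probably look like this (untested; as far as I know, set-device-class refuses to overwrite an existing class, hence the rm-device-class first):
> 
>   $ ceph osd crush rm-device-class <old_ssd_osd>
>   $ ceph osd crush set-device-class nvme <old_ssd_osd>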
> 
> This leads to my second question:
> 2) Since the OSD IDs do not change, Ceph should not move any data around when we change both the device classes of the OSDs and the device class in the crush rule - correct?
> 
> After this operation, adding NVMes to our cluster should let them join this crush rule automatically, and once all SSDs have been replaced with NVMes,
> the workaround is automatically gone.
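> 
> To double-check that nothing moves, I guess one could dump the PG-to-OSD mappings of the pool before and after the change and diff them, e.g. (untested; <pool_id> is the id of the metadata pool):
> 
>   $ ceph osd getmap -o osdmap.before
>   # ... re-class the OSDs and adjust the crush rule ...
>   $ ceph osd getmap -o osdmap.after
>   $ osdmaptool osdmap.before --test-map-pgs-dump --pool <pool_id> > pgs.before
>   $ osdmaptool osdmap.after  --test-map-pgs-dump --pool <pool_id> > pgs.after
>   $ diff pgs.before pgs.after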
> 
> As long as the SSDs are still there, though, some tunables might not fit well out of the box anymore, e.g. the "sleep" values for scrub and repair.
> 
> Here my third question:
> 3) Are the tunables used for NVMe devices the same as those used for SSD devices?
>    I do not find any NVMe-specific tunables here:
>    http://docs.ceph.com/docs/master/rados/configuration/osd-config-ref/
>    Only SSD, HDD and Hybrid are shown.
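> 
> What I would probably do is check which sleep values the OSDs actually pick up, e.g.:
> 
>   $ ceph daemon osd.<id> config show | grep sleep
> 
> As far as I understand, there are only _hdd / _ssd / _hybrid variants (e.g. osd_recovery_sleep_ssd), and the choice is based on whether the underlying device is rotational, so an NVMe-backed OSD should simply end up with the _ssd values.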
> 
> Cheers,
>         Oliver
> 


