Pools and classes


 



Dear all,

I have a Ceph cluster where, so far, all OSDs have been rotational HDDs
(there are actually some SSDs, but they are used only for block.db and the WAL).

I now want to add some SSD disks to be used as OSDs. My use case is:

1) keep using only HDDs for the existing pools
2) create some new pools using only SSD disks
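For step 2, once the SSD OSDs have actually been added, the same device-class mechanism should work the other way around. A sketch (the rule and pool names, and the PG count, are just placeholders):

```shell
# Hypothetical sketch for step 2, after SSD OSDs exist in the cluster.
# Create a replicated rule restricted to the "ssd" device class:
ceph osd crush rule create-replicated replicated_ssd default host ssd

# Create a new pool using that rule (pool name and 128 PGs are examples only):
ceph osd pool create ssd_pool 128 128 replicated replicated_ssd
```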


Let's start with 1 (I haven't added the SSD disks to the cluster yet).

I have some replicated pools and some EC pools. The replicated pools use a
replicated_ruleset rule [*].
I created a new "replicated_hdd" rule [**] with the command:

ceph osd crush rule create-replicated replicated_hdd default host hdd
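Before switching any pool over, the new rule and the per-class hierarchy it maps to can be inspected with standard CLI commands (a sketch, assuming the rule name above):

```shell
# Inspect the compiled steps of the new class-restricted rule:
ceph osd crush rule dump replicated_hdd

# List the device classes CRUSH currently knows about (hdd, ssd, ...):
ceph osd crush class ls

# Show the CRUSH tree including the per-class "shadow" hierarchy
# (default~hdd etc.) that class-based rules actually select from:
ceph osd crush tree --show-shadow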

I then changed the CRUSH rule of an existing pool (which was using
'replicated_ruleset') with the command:


ceph osd pool set <poolname> crush_rule replicated_hdd

This triggered the remapping of some PGs and therefore some data movement.
Is this normal/expected, given that for the time being I have only HDD OSDs?
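For future rule changes, one way to gauge the expected movement in advance is to export the osdmap and test the PG mappings offline with osdmaptool. A rough sketch (the pool id 1 is a placeholder; the real id can be looked up first):

```shell
# Look up the numeric id of the pool in question:
ceph osd pool ls detail

# Export the current osdmap and dump the computed PG->OSD mappings
# for that pool (id 1 used here as a placeholder):
ceph osd getmap -o /tmp/osdmap
osdmaptool /tmp/osdmap --test-map-pgs-dump --pool 1
```

Running this before and after editing the CRUSH rule (osdmaptool can also import a modified CRUSH map) and diffing the output gives an estimate of how many PGs would move.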

Thanks, Massimo



[*]
rule replicated_ruleset {
        id 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
}

[**]
rule replicated_hdd {
        id 7
        type replicated
        min_size 1
        max_size 10
        step take default class hdd
        step chooseleaf firstn 0 type host
        step emit
}
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


