Crush maps: split the root in two parts on OSD nodes with identical disks?

Hi all,

I have a cluster with 6 OSD nodes, each with 20 disks; all 120 disks are strictly identical (same model and size).
(The cluster also includes 3 MON servers on 3 other machines.)

For design reasons, I would like to separate my cluster's storage into 2 pools of 60 disks each.

My idea is to modify the cluster's crushmap in order to split the top of the hierarchy (the root) into two groups, i.e. 10 disks of each OSD node for the first pool and the 10 other disks of each node for the second pool, along the lines of the sketch below.
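
For illustration, here is a minimal sketch of what the relevant part of the edited (decompiled) crushmap could look like. The bucket and rule names (pool1, node1-pool1, ...) and the OSD numbers are made up, and only 2 of the 10 disks per group and 1 of the 6 hosts are shown:

host node1-pool1 {
        id -2
        alg straw
        hash 0  # rjenkins1
        item osd.0 weight 1.000
        item osd.1 weight 1.000
}
host node1-pool2 {
        id -3
        alg straw
        hash 0  # rjenkins1
        item osd.10 weight 1.000
        item osd.11 weight 1.000
}
root pool1 {
        id -10
        alg straw
        hash 0  # rjenkins1
        item node1-pool1 weight 10.000
}
root pool2 {
        id -11
        alg straw
        hash 0  # rjenkins1
        item node1-pool2 weight 10.000
}
rule pool1_rule {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take pool1
        step chooseleaf firstn 0 type host
        step emit
}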

I have already done this on another cluster with 2 sets of disks of different technologies (HDD vs SSD), inspired by: https://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/
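
The same split can also be sketched with the CLI instead of editing the decompiled map by hand (again, the bucket, rule and pool names are hypothetical, and the ruleset id passed at the end should be taken from 'ceph osd crush rule dump'):

ceph osd crush add-bucket pool1 root
ceph osd crush add-bucket node1-pool1 host
ceph osd crush move node1-pool1 root=pool1
# repeat for each of the 10 disks that should go to this group
ceph osd crush set osd.0 1.0 host=node1-pool1
ceph osd crush rule create-simple pool1_rule pool1 host
# on recent releases the pool setting is named crush_rule instead
ceph osd pool set mypool1 crush_ruleset 1

Note that with a hand-made hierarchy like this you probably also want "osd crush update on start = false" in ceph.conf, otherwise restarting OSDs will move themselves back under their default host bucket.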

But is it relevant to do this when all the disks are identical?

Thanks in advance for your advice,
Hervé




