On Tue, 12 Jun 2018 at 15:06, Hervé Ballans <herve.ballans@xxxxxxxxxxxxx> wrote:
Hi all,
I have a cluster with 6 OSD nodes, each with 20 disks; all 120 disks
are strictly identical (same model and size).
(The cluster also includes 3 MON servers on 3 other machines.)
For design reasons, I would like to separate my cluster storage into 2
pools of 60 disks each.
My idea is to modify the CRUSH map so that the top-level root is split
into two groups, i.e. 10 disks from each OSD node for the first pool
and the other 10 disks from each node for the second pool.
I already did that on another cluster with 2 sets of disks of different
technologies (HDD vs SSD), inspired by:
https://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/
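Concretely, the kind of change I have in mind is roughly the following
(bucket, rule and pool names below are only placeholders, not what I
would actually use):

  # export and decompile the current CRUSH map
  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt

  # in crushmap.txt, add per-node host buckets holding 10 OSDs each,
  # two new roots, and one rule per root, along the lines of:
  #
  #   host node1-groupA {
  #           id -21
  #           alg straw2
  #           hash 0
  #           item osd.0 weight 1.000
  #           ...
  #   }
  #   root groupA {
  #           id -31
  #           alg straw2
  #           hash 0
  #           item node1-groupA weight 10.000
  #           ...
  #   }
  #   rule groupA_rule {
  #           id 1
  #           type replicated
  #           min_size 1
  #           max_size 10
  #           step take groupA
  #           step chooseleaf firstn 0 type host
  #           step emit
  #   }

  # recompile, inject, and point a pool at the new rule
  crushtool -c crushmap.txt -o crushmap.new
  ceph osd setcrushmap -i crushmap.new
  ceph osd pool set pool-a crush_rule groupA_rule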
But is it relevant to do that when all the disks are identical?
You could do it this way, but you could also just run two pools over the same 120 OSD disks.
Perhaps if you stated the end goal you are trying to reach, it would be
easier to figure out whether it is relevant or not.
The storage admin in me thinks you spread load and risk better if all 120
disks get used for both pools, but you might have a specific reason; if so,
may we know it?
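If sharing all 120 OSDs works for you, it is also much less to maintain; a
minimal sketch (pool names and PG counts here are only examples, size
pg_num for your own cluster) would be:

  # two pools, both using the default replicated_rule and hence all 120 OSDs
  ceph osd pool create pool-a 1024 1024 replicated
  ceph osd pool create pool-b 1024 1024 replicated

CRUSH still keeps each pool's replicas on separate hosts, you just don't
pin the pools to separate halves of the disks.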
May the most significant bit of your life be positive.