Thanks!
Of course, I know about OSD weights and the ability to adjust them to make the distribution
more or less uniform. We use ceph-deploy to bring up OSDs and have already
noticed that the weights of different-sized OSDs are chosen proportionally to their sizes.
But the question is about something slightly different: which variant (whole 2 TB nodes + whole 1 TB nodes, OR all nodes with 6x2 TB + 6x1 TB) will give a more uniform distribution of used space,
and possibly a more uniform IO load across nodes, by default, i.e. without hand-tuning the crushmap
and weights? And also, which variant will better survive at least a full single-node failure?
Indeed, even with OSDs of the same size but different counts per node, we face
"backfill_toofull" situations rather often. For example, during the migration from 3-OSD
"proof of concept" nodes to 12-OSD pre-production nodes, there will be plenty
of room on the newer 12-OSD nodes, but a space shortage on the old 3-OSD ones. And we have
found only a single solution: temporarily add some 2-3-OSD nodes to the cluster as "helpers",
and remove them after rebalancing is nearly complete.
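In case it is useful, here is one way such a helper node can be drained and removed (osd.36 is a hypothetical OSD id on the helper; the gradual weight steps are my own choice, to avoid triggering another backfill storm):

  # Step the CRUSH weight down instead of removing the OSD at once
  ceph osd crush reweight osd.36 1.0
  ceph osd crush reweight osd.36 0.5
  ceph osd crush reweight osd.36 0.0    # all PGs migrate off this OSD

  # Once it holds no data, remove it from the cluster
  ceph osd out 36
  # stop the ceph-osd daemon on the helper host, then:
  ceph osd crush remove osd.36
  ceph auth del osd.36
  ceph osd rm 36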
Director of Information Technology and Operations,
federal supermarket chain "Уютерра"
megov@xxxxxxxxxx
megov@xxxxxxx
+7 915 855 3139
+7 4742 762 909
Sent: 15 January 2015 10:41
To: Межов Игорь Александрович
Cc: ceph-users@xxxxxxxxxxxxxx >> Ceph Users
Subject: Re: Better way to use osd's of different size
You should weight the OSDs so the weight represents the size (like a weight of 3.68 for a 4 TB HDD).
ceph-deploy does this automatically.
Nevertheless, even with the correct weights the disks will not be filled in an equal distribution. For that purpose you can use reweight for single OSDs, or do it automatically with "ceph osd reweight-by-utilization".
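For example (the OSD id and the 110 threshold below are just illustrations; reweight-by-utilization takes a percent-of-average-utilization cutoff):

  # manually lower the reweight (0.0-1.0) of a single over-full OSD
  ceph osd reweight 7 0.85

  # or let ceph reweight all OSDs above 110% of average utilization
  ceph osd reweight-by-utilization 110

  # check the result
  ceph osd tree
  ceph df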
Udo
Hi!
We have a small production Ceph cluster, based on the Firefly release.
It was built using hardware we already had on site, so it is not "new & shiny",
but it works quite well. It was started in 2014-09 as a "proof of concept" with 4 hosts
with 3 x 1 TB OSDs each: 1U dual-socket Intel 54XX & 55XX platforms on a 1 Gbit network.
Now it contains 4 x 12-OSD nodes on a shared 10 Gbit network. We use it as a backing store
for VMs running under qemu+rbd.
During the migration we temporarily use 1U nodes with 2 TB OSDs and already face some
problems with uneven distribution. I know that the best practice is to use OSDs of the same
capacity, but sometimes that is impossible.
Now we have 24-28 spare 2 TB drives and want to increase capacity in the same boxes.
Which is the better way to do it:
- replace 12 x 1 TB drives with 12 x 2 TB drives, so we will have 2 nodes full of 2 TB drives while the
other nodes remain in the 12 x 1 TB config;
- or replace 1 TB with 2 TB drives in a more uniform way, so that every node has 6 x 1 TB + 6 x 2 TB drives?
I feel that the second way will give a smoother distribution among the nodes, and that an
outage of one node will have a lesser impact on the cluster. Am I right, and what can you
advise in such a situation?
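To put rough numbers on the failure question (assuming default CRUSH weights of ~0.91 per 1 TB and ~1.82 per 2 TB drive; the figures are illustrative only):

  # Total raw weight is the same either way:
  #   variant 1: 2*(12*1.82) + 2*(12*0.91) = 65.52
  #   variant 2: 4*(6*1.82 + 6*0.91)       = 65.52
  #
  # variant 1: losing a 12x2TB node removes 21.84, i.e. ~33% of raw capacity,
  #            which must re-replicate onto the remaining nodes;
  # variant 2: losing any node removes 16.38, i.e. a flat ~25% hit,
  #            regardless of which node fails.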
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com