Hi Ceph Users,
We plan to add 20 storage nodes to our existing cluster of 40 nodes; each node has 36 x 5.458 TiB drives. We would like to add the new storage such that all new OSDs are prepared, activated and ready to take data, but do not actually receive any data until we start slowly increasing their weightings. We also expect that adding them in this way will not cause any backfilling before we adjust the weightings.
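For clarity, the per-OSD procedure we have in mind looks roughly like the sketch below (the OSD id, hostname and reweight step sizes are just examples, not our exact values):

ceph osd set noin - stop new OSDs being marked "in" automatically
ceph osd crush add osd.43 0 host=ceph-sn833 - add the OSD with a crush weight of 0
ceph osd in osd.43 - mark it in; with crush weight 0 it should hold no data
ceph osd crush reweight osd.43 0.5 - later: raise the crush weight in small steps
ceph osd crush reweight osd.43 1.0 - waiting for the cluster to settle between steps, until the full 5.458 is reached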
When testing the deployment on our development cluster, adding a new OSD to the host bucket with a crush weight of 5.458 and an OSD reweight of 0 (we have set "noin") causes the acting sets of PGs in a few pools to change, thus triggering backfilling. Interestingly, none of the backfilling PGs have the new OSD in their acting set.
This is not what we expected, so I have to ask: is what we are trying to achieve possible, and if so, how should we best go about it?
_______________________________________________
Commands run:
ceph osd crush add osd.43 0 host=ceph-sn833 - causes no backfilling
ceph osd crush add osd.44 5.458 host=ceph-sn833 - does cause backfilling
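(For anyone wanting to reproduce this, the backfilling PGs and their up/acting sets can be inspected with something like the following; the pg id is only a placeholder:)

ceph pg dump pgs_brief | grep -i backfill - lists PG state plus up and acting sets
ceph pg map 1.2f - shows the up and acting set for a single PG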
For multiple hosts and OSDs, we plan to prepare a new crushmap and inject that into the cluster.
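Roughly, the crushmap workflow we have in mind is the following sketch (file names are examples):

ceph osd getcrushmap -o crushmap.bin - export the current crushmap
crushtool -d crushmap.bin -o crushmap.txt - decompile it to text
(edit crushmap.txt to add the new hosts and OSDs)
crushtool -c crushmap.txt -o crushmap.new - recompile
crushtool -i crushmap.new --test --show-statistics --rule 0 --num-rep 3 - optionally sanity-check the mappings
ceph osd setcrushmap -i crushmap.new - inject it into the cluster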
Best wishes,
Bruno
Bruno Canning
LHC Data Store System Administrator
Scientific Computing Department
STFC Rutherford Appleton Laboratory
Harwell Oxford
Didcot
OX11 0QX
Tel. +44 ((0)1235) 446621