Hi, our procedure is usually as follows (assuming the cluster was healthy
apart from the failure, with 2 replicas in the crush rule):
1. Stop the OSD process (to keep it from flapping up and down and putting
load on the cluster).
2. Wait for the reweight to drop to 0 (happens after 5 min I think; it
can be set manually, but I let it happen by itself).
3. Remove the OSD from the cluster (ceph auth del, ceph osd crush remove,
ceph osd rm).
4. Note down the journal partitions if needed.
5. Unmount the drive and replace the disk with the new one.
6. Ensure the device permissions in /dev are set to ceph:ceph.
7. Run mklabel gpt on the new drive.
8. Create the new OSD with ceph-disk prepare (it is added to the crush
map automatically; rough commands for these steps are sketched below).
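To make that concrete, on our nodes (still using ceph-disk) it boils down
to something like the following; osd.42, /dev/sdX and the journal
handling are just placeholders, adjust to your own IDs and devices:

    # stop the daemon so it stops flapping
    systemctl stop ceph-osd@42

    # once it shows as out / reweight 0, remove it from the cluster
    ceph auth del osd.42
    ceph osd crush remove osd.42
    ceph osd rm 42

    # unmount the old data directory, then swap the disk
    umount /var/lib/ceph/osd/ceph-42

    # after the swap: fix ownership, label and prepare the new drive
    # (with filestore you can pass the journal partition as a second argument)
    chown ceph:ceph /dev/sdX
    parted -s /dev/sdX mklabel gpt
    ceph-disk prepare /dev/sdX

At least on our setup udev then activates the new OSD and it shows up in
the crush map under the host by itself.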
Your procedure sounds reasonable to me; as far as I'm concerned, you
shouldn't have to wait for rebalancing after you remove the OSD. All this
might not be 100% by the Ceph book, but it works for us :)
Josef
On 06/08/18 16:15, Iztok Gregori wrote:
Hi Everyone,
What is the best way to replace a failing (SMART Health Status:
HARDWARE IMPENDING FAILURE) OSD hard disk?
Normally I will (rough commands are sketched after the list):
1. set the OSD out
2. wait for rebalancing
3. stop the OSD on the OSD server (unmount if needed)
4. purge the OSD from Ceph
5. physically replace the disk with the new one
6. with ceph-deploy:
6a. zap the new disk (just in case)
6b. create the new OSD
7. add the new OSD to the crush map.
8. wait for rebalancing.
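For reference, with a recent ceph-deploy that is roughly the following
(osd.42, osd-host and /dev/sdX are just placeholders, and older
ceph-deploy releases use the host:device syntax instead):

    ceph osd out 42
    # wait for recovery/rebalancing to finish, e.g. keep an eye on:
    ceph -s

    systemctl stop ceph-osd@42
    umount /var/lib/ceph/osd/ceph-42

    # purge removes the OSD from the crush map, auth and the OSD map in one go
    ceph osd purge 42 --yes-i-really-mean-it

    # after the physical swap, from the admin node:
    ceph-deploy disk zap osd-host /dev/sdX
    ceph-deploy osd create --data /dev/sdX osd-host
    # the new OSD normally registers itself in the crush map when it activates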
My questions are:
- Is my procedure reasonable?
- What if I skip step #2 and, instead of waiting for rebalancing, purge
the OSD directly?
- Is it better to reweight the OSD before taking it out?
I'm running a Luminous (12.2.2) cluster with 332 OSDs; the failure domain
is host.
Thanks,
Iztok