Re: Re-install host OS on Ceph OSD node

Hi,

there's no need to drain the OSDs and add them back later. Since ceph-volume is in place, you can scan and activate the existing OSDs on a reinstalled node. In cephadm there's a similar mechanism [1]:

ceph cephadm osd activate <host>...
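
For a package-based (non-cephadm) deployment like yours, a minimal sketch of the ceph-volume path might be (assuming the OSDs were created with ceph-volume lvm, and ceph.conf plus the relevant keyrings are already back on the node):

ceph-volume lvm list
ceph-volume lvm activate --all

The first command just confirms the existing OSDs are still detected on their drives; the second activates every OSD it finds.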

If you want to prevent the OSDs from being marked out, you can run:

ceph osd add-noout <host>

This sets noout only for the OSDs on this node; the rest of the cluster can still rebalance in case other OSDs fail during that time. I'm not sure whether maintenance mode [2] will do that for you, I haven't used it myself yet.
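
Once the node is reinstalled and its OSDs are back up, don't forget to clear the flag again; assuming the add-noout form above, that would be something like:

ceph osd rm-noout <host>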

[1] https://docs.ceph.com/en/latest/cephadm/services/osd/?highlight=reinstalled#activate-existing-osds
[2] https://docs.ceph.com/en/latest/cephadm/host-management/#maintenance-mode
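
For reference, the documented maintenance-mode workflow in [2] looks roughly like this (untested on my side):

ceph orch host maintenance enter <host>
# ... replace the OS drive and reinstall ...
ceph orch host maintenance exit <host>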

Quoting Geoffrey Rhodes <geoffrey@xxxxxxxxxxxxx>:

Good day

I have a ceph osd node with an OS drive that has errors and may soon fail.
There are 8 x 18TB drives installed in this node.
The journal for each drive is co-located on the drive itself.

I'd like to replace the failing OS drive, re-install the OS (same node name
and IP addressing), push the admin keys and conf to the node again and
re-activate the eight storage drives.
Is this possible without affecting the crushmap and data distribution?

In the past I would have set the weight of each drive to 0, waited for the
data to backfill elsewhere, and then purged the drives and the node from
the cluster.
Then start over, installing the node and adding it to the correct crush
bucket, etc.
This feels like an unnecessary course of action when all I need to do is
replace the OS drive.
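
(Concretely, that old procedure was roughly, per OSD, something like:

ceph osd crush reweight osd.<id> 0
# wait for backfill to finish, then:
ceph osd purge <id> --yes-i-really-mean-it

followed by re-deploying the node from scratch.)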

OS: Ubuntu 18.04.6 LTS
Ceph version: 15.2.17 - Octopus


Kind regards
Geoffrey Rhodes



_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


