Re: Removing an OSD host server


 



The host entries got put there because OSDs started for the first time on servers with those names. If you name the new servers identically to the failed ones, the new OSDs will place themselves under the existing host buckets in the CRUSH map and everything will be fine. Based on what you've described of the situation, there shouldn't be any problems with that.
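If you would rather clean up the stale host buckets instead of reusing the names, an empty host bucket can be removed from the CRUSH map directly. A rough sketch of the usual sequence is below; `osd.N` and `failed-host` are placeholders for your actual OSD ID and hostname, and the commands assume the OSDs on that host are already dead and out:

```shell
# Check what is still listed under the failed host
ceph osd tree

# If any stale OSD entries remain under the host, remove them first
# (osd.N is a placeholder for each OSD ID on the failed host)
ceph osd crush remove osd.N
ceph auth del osd.N
ceph osd rm osd.N

# Once the host bucket is empty, remove it from the CRUSH map;
# it will then disappear from "ceph osd tree" output
ceph osd crush remove failed-host
```

Note that `ceph osd crush remove` will refuse to remove a bucket that still contains items, so the per-OSD cleanup has to happen first. These are cluster-mutating operations, so double-check the tree output before each step.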


On Fri, Dec 22, 2017, 9:00 PM Brent Kennedy <bkennedy@xxxxxxxxxx> wrote:

Been looking around the web and I can't find what seems to be a “clean way” to remove an OSD host from the “ceph osd tree” command output.  I am therefore hesitant to add a server with the same name while I still see the removed/failed nodes in the list.  Anyone know how to do that?  I found an article here, but it doesn't seem to be a clean way:  https://arvimal.blog/2015/05/07/how-to-remove-a-host-from-a-ceph-cluster/

 

Regards,

-Brent

 

Existing Clusters:

Test: Jewel with 3 osd servers, 1 mon, 1 gateway

US Production: Firefly with 4 osd servers, 3 mons, 3 gateways behind haproxy LB

UK Production: Hammer with 5 osd servers, 3 mons, 3 gateways behind haproxy LB

 

 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
