How to replace a node in Ceph?

Thanks for the reply.
The new node is more powerful than the broken one, so this is also a
hardware upgrade. Replacing a node seems like a common operation, so I want
to work out a general-purpose method.


2014-09-04 21:15 GMT+08:00 Loic Dachary <loic at dachary.org>:

> Hi,
>
> If the new machine can host the disks of the former machine, it should be
> enough to
>
> a) install the new machine with ceph
> b) shutdown the old and new machines
> c) move the disks from the old machine to the new
> d) reboot the new machine
>
> and the OSDs will show up as if nothing happened.
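>
> For illustration, a rough sketch of that flow (noout here is just an extra
> precaution so nothing gets marked out while the host is down; the install
> and init-system details depend on your deployment):
>
>     # keep the cluster from rebalancing while the host is down
>     ceph osd set noout
>
>     # a) install ceph on the new machine (with your usual deployment tool)
>     # b) shut down the old and new machines
>     # c) move the OSD disks from the old machine to the new one
>     # d) boot the new machine; the OSDs should come up under their old ids
>
>     # check that the OSDs are back and the cluster is healthy
>     ceph osd tree
>     ceph -s
>
>     # once everything is active+clean again
>     ceph osd unset noout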
>
> Cheers
>
> P.S. I explored this idea last year and wrote a few notes at
> http://dachary.org/?p=2428
>
> On 04/09/2014 14:56, Ding Dinghua wrote:
> > Hi all,
> >         I'm new to ceph, and I apologize if this question has been
> > asked before.
> >
> >         I have set up an 8-node ceph cluster. After two months of
> > running, the network controller of one node broke, so I have to replace
> > that node with a new one.
> >         I don't want to trigger data migration, since all I want to do
> > is replace a node, not shrink the cluster and then enlarge it again.
> >         I think the following steps may work:
> >         1)  set osd_crush_update_on_start to false, so that when an osd
> > starts, it won't modify the crushmap and trigger data migration.
> >         2)  set the noout flag, to prevent osds from being marked out of
> > the cluster, which would trigger data migration.
> >         3)  mark all osds on the broken node down (actually, since the
> > network controller is broken, these osds are already down).
> >         4)  prepare the osd on the new node, keeping the osd id the same
> > as on the broken node (see the sketch below):
> >              ceph-osd -i [osd_num] --osd-data=path1 --mkfs
> >         5)  start the osd on the new node; peering and backfilling will
> > start automatically.
> >         6)  wait until 5) completes, and repeat 4) and 5) until all osds
> > on the broken node have been moved to the new node.
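> >
> >         A rough command sketch of steps 1) to 6) (osd id 3 and the data
> > path are placeholders; keyring/auth setup and init-system details are
> > left out):
> >
> >              # in ceph.conf on the new node, under [osd]
> >              osd crush update on start = false
> >
> >              # cluster-wide, before touching anything
> >              ceph osd set noout
> >
> >              # make sure the osd on the broken node is marked down
> >              ceph osd down 3
> >
> >              # on the new node: recreate the osd data with the same id
> >              ceph-osd -i 3 --osd-data=/var/lib/ceph/osd/ceph-3 --mkfs
> >
> >              # start the osd, watch peering/backfill until active+clean
> >              ceph-osd -i 3
> >              ceph -w
> >
> >              # repeat for the other osds; when all of them are done
> >              ceph osd unset noout
> >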
> >         I have done some tests on my test cluster, and it seems to work,
> > but I'm not quite sure it is correct in theory, so any comments will be
> > appreciated.
> >         Thanks.
> >
> > --
> > Ding Dinghua
> >
> >
> > _______________________________________________
> > ceph-users mailing list
> > ceph-users at lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
>
> --
> Loïc Dachary, Artisan Logiciel Libre
>
>


-- 
Ding Dinghua