Re: newbie question: rebooting the whole cluster, power failure

On Thu, Sep 5, 2013 at 11:42 AM, Bernhard Glomm
<bernhard.glomm@xxxxxxxxxxx> wrote:
>
> Hi all,
>
> as a Ceph newbie I've got another question that has probably been solved long ago.
> I have a test cluster consisting of two OSDs that also host MONs,
> plus one to five extra MONs.
> Now I want to reboot all instances, simulating a power failure.
> So I shut down the extra MONs,
> then shut down the first OSD/MON instance (call it "ping"),
> and after the shutdown is complete, shut down the second OSD/MON
> instance (call it "pong").
> 5 minutes later I restart "pong"; then, after I've checked that all services are
> up and running, I restart "ping". Afterwards I restart the MON that I brought
> down last, but not the other MONs (since - surprise - in this test
> scenario they are just virtual instances residing on some Ceph RBDs).
>
> I think this is the wrong way to do it, since it breaks the cluster irrecoverably...
> at least that's what it seems like; Ceph keeps trying to reach one of the MONs that isn't there yet.
> How do I shut down and restart the whole cluster in a coordinated way in case
> of a power failure? (I need a script for our UPS.)
>
> And a second question regarding ceph-deploy:
> How do I specify a second NIC/address to be used for the internal cluster communication?

You will not be able to set this up with ceph-deploy. A separate
cluster network is a more specific (or a bit more advanced)
configuration than what ceph-deploy offers.
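
For what it's worth, the cluster network is normally configured by hand
in ceph.conf rather than through a ceph-deploy option. A minimal sketch
(the subnets below are placeholders, substitute your own):

    [global]
    # client traffic on the first NIC (placeholder subnet)
    public network = 192.168.0.0/24
    # OSD replication/heartbeat traffic on the second NIC (also a placeholder)
    cluster network = 10.0.0.0/24

If I remember correctly you can then distribute the edited file with
"ceph-deploy config push <hostname>" and restart the daemons.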

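On the first question (a coordinated shutdown for a power failure):
the usual approach is to stop the cluster from rebalancing while the
nodes go down, and on power-up to bring the MONs back before the OSDs
so a quorum exists first. A rough sketch for a UPS script, assuming
the sysvinit init scripts and the hostnames from your mail:

    #!/bin/sh
    # Keep Ceph from marking the stopped OSDs "out" and rebalancing
    # while the machines are powered off.
    ceph osd set noout

    # Shut down the extra MON VMs first (not shown), then stop the
    # two OSD/MON nodes in turn ("ping" and "pong" are the hostnames
    # from the test setup).
    ssh ping 'service ceph stop'
    ssh pong 'service ceph stop'

On power-up, start the daemons again, wait until "ceph -s" reports a
MON quorum, and only then run "ceph osd unset noout". Note that MONs
whose disks are RBD images served by this same cluster cannot take
part in forming that initial quorum - which is probably why your
cluster kept calling for a MON that wasn't there yet.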

>
> TIA
>
> Bernhard
>
>
>
> --
> ________________________________
> Bernhard Glomm
> IT Administration
>
> Phone: +49 (30) 86880 134
> Fax: +49 (30) 86880 100
> Skype: bernhard.glomm.ecologic
> Ecologic Institut gemeinnützige GmbH | Pfalzburger Str. 43/44 | 10717 Berlin | Germany
> GF: R. Andreas Kraemer | AG: Charlottenburg HRB 57947 | USt/VAT-IdNr.: DE811963464
> Ecologic™ is a Trade Mark (TM) of Ecologic Institut gemeinnützige GmbH
> ________________________________
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




