Re: newbie question: rebooting the whole cluster, powerfailure

On Thu, Sep 5, 2013 at 12:38 PM, Gregory Farnum <greg@xxxxxxxxxxx> wrote:
> On Thu, Sep 5, 2013 at 9:31 AM, Alfredo Deza <alfredo.deza@xxxxxxxxxxx> wrote:
>> On Thu, Sep 5, 2013 at 11:42 AM, Bernhard Glomm
>> <bernhard.glomm@xxxxxxxxxxx> wrote:
>>>
>>> Hi all,
>>>
>>> as a Ceph newbie I have another question that has probably been solved long ago.
>>> My test cluster consists of two OSDs that also host MONs,
>>> plus one to five additional MONs.
>>> Now I want to reboot all instances, simulating a power failure.
>>> So I shut down the extra MONs,
>>> then shut down the first OSD/MON instance (call it "ping"),
>>> and after its shutdown is complete, shut down the second OSD/MON
>>> instance (call it "pong").
>>> Five minutes later I restart "pong"; after checking that all services are
>>> up and running I restart "ping", and afterwards I restart the MON that I
>>> brought down last, but not the other MONs (since - surprise - in this test
>>> scenario they are just virtual instances residing on some Ceph RBDs).
>>>
>>> I think this is the wrong way to do it, since it breaks the cluster unrecoverably...
>>> at least that's how it seems; Ceph tries to reach one of the MONs that isn't up yet.
>>> How do I shut down and restart the whole cluster in a coordinated way in case
>>> of a power failure (I need a script for our UPS)?
>>>
>>> And a second question regarding ceph-deploy:
>>> How do I specify a second NIC/address to be used for internal cluster communication?
>>
>> You will not be able to do something like this with ceph-deploy. This
>> sounds like a more specific (or a bit more advanced) configuration
>> than what ceph-deploy offers.
>
> Actually, you can — when editing the ceph.conf (before creating any
> daemons) simply set public addr and cluster addr in whatever section
> is appropriate. :)
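
For reference, a minimal ceph.conf sketch along those lines (all subnets and addresses below are hypothetical examples; `public network`/`cluster network` set the subnets cluster-wide, while `public addr`/`cluster addr` pin an individual daemon):

```ini
# Cluster-wide: client traffic on one subnet, replication on another.
[global]
    public network  = 192.168.1.0/24
    cluster network = 10.0.0.0/24

# Or per daemon, in the appropriate section:
[osd.0]
    public addr  = 192.168.1.10
    cluster addr = 10.0.0.10
```

As Greg notes, this needs to be in ceph.conf before the daemons are created, so they bind to the right interfaces from the start.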

Oh, you are right! I was thinking about a flag in ceph-deploy for some reason :)

Sorry for the confusion!

> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




