Re: Ceph behavior in case of network failure

On Mon, Jan 30, 2012 at 9:13 PM, madhusudhan
<madhusudhana.u.acharya@xxxxxxxxx> wrote:
> Gregory Farnum <gregory.farnum <at> dreamhost.com> writes:
>
>>
>> On Sat, Jan 28, 2012 at 8:52 PM, Madhusudhan
>> <madhusudhana.u.acharya <at> gmail.com> wrote:
>> > I have configured Ceph on CentOS 5.6 after a
>> > very long fight, and I am now evaluating it.
>> > Forgive me if my question seems amateur.
>> > Consider a situation where my core switch
>> > fails, causing a network failure across the
>> > entire data center. What happens to the Ceph
>> > cluster? Will it survive the network failure
>> > and come back online once the network is
>> > restored?
>>
>> Hmmm. If your network breaks horribly, you will probably need to
>> restart the daemons — once their communication breaks they'll start
>> marking each other down and the monitors will probably accept those
>> reports once the network starts working again. (Actually, maybe we
>> should update that so the monitors reject sufficiently old reports.)
>> But it will be a transient effect; restarting your machines will be
>> enough to restore service. :)
>> -Greg
>>
> Thank you, Greg, for the reply. Do we have to restart both the
> osd and mon daemons on all the nodes? In one case, I rebooted an
> OSD node (while it was running, to check fault tolerance), and
> when it came back online its journal was corrupted. I had to
> reinitialize the node by erasing all of its data. And rebooting
> the entire cluster (in case of a network failure) doesn't seem
> like a good idea to me, since clients will start mounting the
> cluster immediately and begin reading from and writing to it.

Hmm, actually I checked with some coworkers and you shouldn't need to
restart anything at all — the OSDs will correct the report themselves.
So it should all be good! You'll likely experience some slowness while
the OSD states flap (up and down), but it will all be transparent to
the clients.
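If you want to watch that flapping settle, a rough sketch along
these lines will poll the osdmap and print it with a timestamp.
This is just an illustration, not something we ship: it assumes the
"ceph" CLI is on the node's PATH and can reach the monitors with a
readable admin keyring, and the exact output format of "ceph osd
stat" varies between Ceph versions.

    #!/usr/bin/env python
    # Illustration only: watch OSD up/down flapping settle after a
    # network blip. Assumes the "ceph" CLI is on PATH and can reach
    # the monitors; the line's format varies by Ceph version.
    import subprocess
    import time

    while True:
        p = subprocess.Popen(["ceph", "osd", "stat"],
                             stdout=subprocess.PIPE)
        out = p.communicate()[0].decode("utf-8").strip()
        print(time.strftime("%H:%M:%S") + "  " + out)
        time.sleep(5)

Once the up/in counts in that line stop changing and match your
OSD count, the maps have settled; "ceph -w" shows the same thing
as a live stream.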
-Greg
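
P.S. If you'd rather script the "did everything come back?" check
than eyeball it, polling "ceph health" until it reports HEALTH_OK
does the job. Same caveats as the sketch above about the CLI and
keyring being available on the host:

    #!/usr/bin/env python
    # Illustration only: block until the cluster reports HEALTH_OK
    # after the network returns. Same CLI/keyring assumptions as
    # the previous sketch.
    import subprocess
    import time

    while True:
        p = subprocess.Popen(["ceph", "health"],
                             stdout=subprocess.PIPE)
        status = p.communicate()[0].decode("utf-8").strip()
        print(status)
        if status.startswith("HEALTH_OK"):
            break
        time.sleep(10)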