Re: Enclosure power failure pausing client IO till all connected hosts up

Hi Max A. Krasilnikov,
Could you please explain why we need 3+ nodes with a replication factor of 2?
My understanding is that client IO depends on min_size, which is 1 in this case.
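For reference, both settings in question can be inspected per pool with the standard Ceph CLI (the pool name `rbd` below is only an example; substitute the pool actually in use):

```shell
# Replication factor of the pool (2 in this setup)
ceph osd pool get rbd size

# Minimum number of replicas that must be up for the pool to accept IO
ceph osd pool get rbd min_size
```

With size=2 and min_size=1, the expectation is that a PG keeps serving IO as long as one replica remains up.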

Thanks & Regards
Somnath

-----Original Message-----
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Max A. Krasilnikov
Sent: Monday, July 27, 2015 4:07 AM
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re:  Enclosure power failure pausing client IO till all connected hosts up

Hello!

On Tue, Jul 07, 2015 at 02:21:56PM +0530, mallikarjuna.biradar wrote:

> Hi all,

> Setup details:
> Two storage enclosures each connected to 4 OSD nodes (Shared storage).
> Failure domain is Chassis (enclosure) level. Replication count is 2.
> Each host has allotted with 4 drives.

> I have active client IO running on cluster. (Random write profile with
> 4M block size & 64 Queue depth).

> One of the enclosures had a power loss, so all OSDs on the hosts
> connected to that enclosure went down, as expected.

> But client IO got paused. After some time, the enclosure and the hosts
> connected to it came back up, and all OSDs on those hosts came up.

> Until then, the cluster was not serving IO. Once all hosts and OSDs
> pertaining to that enclosure came up, client IO resumed.


> Can anybody help me understand why the cluster was not serving IO during
> the enclosure failure? Or is it a bug?

With a replication factor of 2, you need 3+ nodes in order to keep serving clients, if chooseleaf type > 0.
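For illustration, a CRUSH rule matching the chassis-level failure domain described above might look like the following (pre-Luminous syntax; the rule name and ruleset number are hypothetical). The `chooseleaf ... type chassis` step forces each replica onto a distinct chassis, so with only two chassis there is no third failure domain to fall back on:

```
rule replicated_chassis {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type chassis
        step emit
}
```

The current rule in effect can be checked with `ceph osd crush rule dump`.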

--
WBR, Max A. Krasilnikov
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

