Re: ceph stability

On 20.12.2012 15:31, Mark Nelson wrote:
> On 12/20/2012 01:08 AM, Roman Hlynovskiy wrote:
>> Hello Mark,
>>
>> for multi-mds solutions do you refer to multi-active arch or 1 active
>> and many standby arch?
> 
> That's a good question!  I know we don't really recommend multi-active
> right now for production use.  Not sure what our current recommendations
> are for multi-standby.  As far as I know it's considered to be more
> stable.  I'm sure Greg or Sage can chime in with a more accurate
> assessment.

We have been testing a lot with multi-standby, because a single MDS does
not make much sense in a cluster. Maybe the trick is to have only one
standby, turning the single point of failure (SPOF) into a double point
of failure (DPOF)?
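For reference, a minimal sketch of what we mean by 1 active + 1 standby
(the hostnames mds-node1/mds-node2 are just placeholders for your own
machines):

    [mds.a]
            host = mds-node1

    [mds.b]
            host = mds-node2

With max_mds left at its default of 1, only one MDS rank is active and
the second daemon stays standby; "ceph mds stat" should then report one
up:active and one up:standby.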

Up to 0.55, even with some of Yan's fixes, Ceph has been too unstable
whenever it came to MDS failover. As long as the same MDS stayed active,
it was quite stable.

We are really hoping for 0.56 with the additional MDS fixes and the
fixed code in kernel 3.8. We have been waiting to replace our existing
cluster fs with CephFS for at least one and a half years, but it has
never been stable enough in our setup with a standby MDS and failover,
let alone multi-active. We do not want to risk a complete cluster
failure.

Amon Ott
-- 
Dr. Amon Ott
m-privacy GmbH           Tel: +49 30 24342334
Am Köllnischen Park 1    Fax: +49 30 99296856
10179 Berlin             http://www.m-privacy.de

Amtsgericht Charlottenburg, HRB 84946

Managing directors:
 Dipl.-Kfm. Holger Maczkowsky,
 Roman Maczkowsky

GnuPG-Key-ID: 0x2DD3A649


