Re: 2 of 3 nodes down

On 02/12/2013 10:57 AM, Oliver Liebel wrote:
hello,

I'm trying to get a setup working where I shut down 2 of 3 nodes
and Ceph keeps working, e.g. like a RAID 1 + spare.

The setup runs on Ubuntu Precise 12.04 at the latest patch level, Ceph
version 0.56.2; every node acts as mon, mds and osd, using the default pools.

I tried different variants, changing the replication level (rep size) for
all (default) pools
(data, metadata, rbd) from 2 to 3, and min_size to 0 (I know this isn't a
good idea,
good idea,

I'm not sure if min_size 0 actually works. (Should it even be a valid value?) Have you tried setting it to 1?

Also, what does your crushmap look like? That's interesting to see.

What does ceph -s say when you shut down the second node?
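For reference, something like the following would set the replica count and minimum write size on the default pools and then show the cluster state; the pool names are the stock ones (data, metadata, rbd), so adjust as needed for your setup. Note too that if all three nodes run monitors, losing two of them also loses monitor quorum, which would stall the cluster regardless of pool settings.

```shell
# Raise the replica count and lower min_size on each default pool,
# so the cluster can keep serving I/O with a single surviving replica:
for pool in data metadata rbd; do
    ceph osd pool set $pool size 3
    ceph osd pool set $pool min_size 1
done

# Check overall cluster health and PG states:
ceph -s
```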

Wido

just a proof of concept),
but the result is always the same:
if I shut down the first node, everything keeps working,
but after shutting down the second node, Ceph stops working and the mount is
unreachable.

any ideas?

thanks in advance
oliver

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


--
Wido den Hollander
42on B.V.

Phone: +31 (0)20 700 9902
Skype: contact42on

