redundancy with 2 nodes

Hi,

Is it possible to achieve redundancy with 2 nodes only?

cephadmin@ceph1:~$ ceph osd tree
# id    weight  type name       up/down reweight
-1      10.88   root default
-2      5.44            host ceph1
0       2.72                    osd.0   up      1
1       2.72                    osd.1   up      1
-3      5.44            host ceph2
2       2.72                    osd.2   up      1
3       2.72                    osd.3   up      1

cephadmin@ceph1:~$ ceph status
    cluster bce2ff4d-e03b-4b75-9b17-8a48ee4d7788
     health HEALTH_OK
monmap e1: 2 mons at {ceph1=192.168.30.21:6789/0,ceph2=192.168.30.22:6789/0}, election epoch 12, quorum 0,1 ceph1,ceph2
     mdsmap e7: 1/1/1 up {0=ceph1=up:active}, 1 up:standby
     osdmap e88: 4 osds: 4 up, 4 in
      pgmap v2051: 1280 pgs, 5 pools, 13184 MB data, 3328 objects
            26457 MB used, 11128 GB / 11158 GB avail
                1280 active+clean

I would expect that if I shut down one node, the system would keep running. But when I tested it, I could not even execute the "ceph status" command on the remaining node.
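Roughly how I tested it (from memory, so the exact commands may have differed slightly):

cephadmin@ceph2:~$ sudo shutdown -h now

cephadmin@ceph1:~$ ceph status
(hangs with no output until ceph2 comes back up)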

I set "osd_pool_default_size = 2" (min_size=1) on all pools, so I thought that each copy will reside on each node. Which means that if 1 node goes down the second one will be still operational.

I think my assumptions are wrong, but I could not find an explanation for why this happens.

Thanks,
Jiri