Re: 2-Node Cluster - possible scenario?

Quorum can be achieved with a single monitor node (for testing purposes this is OK, but of course it is a single point of failure). However, the default for OSD pools is three-way replication (this can be changed), so it is easier to start with three OSD nodes and one monitor node. In your case the monitor node would not need to be very powerful; a lower-spec system would do, which would free your previously suggested mon node to serve as a third OSD node instead.
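If you do want to stay with two OSD nodes for now, the replication level can be lowered. A minimal sketch, assuming a pool named "rbd" (substitute your own pool name):

    # Reduce an existing pool to 2 replicas, and allow I/O to continue
    # with a single copy while one OSD node is down:
    ceph osd pool set rbd size 2
    ceph osd pool set rbd min_size 1

    # Or set the defaults for newly created pools in ceph.conf [global]:
    osd pool default size = 2
    osd pool default min_size = 1

Note that min_size = 1 trades safety for availability: while only one copy is writable, losing that node as well means data loss.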

-----Original Message-----
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Hermann Himmelbauer
Sent: Monday, October 26, 2015 12:17 AM
To: ceph-users@xxxxxxxxxxxxxx
Subject: 2-Node Cluster - possible scenario?

Hi,
In a little project of mine, I plan to start a Ceph storage cluster with a small setup and scale it up later. Perhaps someone can advise me on whether the following would work (two nodes with OSDs, a third node with a monitor only; a minimal sketch of the monitor layout follows the list):

- 2 nodes (enough RAM and CPU), 6 x 3 TB hard disks for OSDs -> 9 TB usable space in the case of 3x redundancy, 1 monitor on each of the nodes
- 1 extra node that has no OSDs but runs a third monitor
- 10 GBit Ethernet as the storage backbone
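For reference, a minimal ceph.conf sketch of that three-monitor layout; the hostnames, addresses, and fsid below are placeholders, not real values:

    [global]
    fsid = <your cluster fsid>
    mon initial members = node1, node2, mon3
    mon host = 10.0.0.1, 10.0.0.2, 10.0.0.3

With three monitors, the cluster keeps quorum (2 of 3) if any single monitor node goes down.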

Later I may add more nodes and OSDs to expand the cluster if more storage or performance is needed.

Would this work and be stable? Or do I need to spread my OSDs across 3 Ceph nodes (e.g., in order to achieve quorum)? If one of the two OSD nodes fails, would the storage still be accessible?

The setup should be used for RBD/QEMU only, no CephFS or the like.

Any hints are appreciated!

Best Regards,
Hermann

--
hermann@xxxxxxx
PGP/GPG: 299893C7 (on keyservers)
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com