Re: Basic questions

Hi John,

Thanks for the responses. 

For (a), I remember reading somewhere that one can run at most one monitor per node. Am I right to assume that this implies the single monitor process would be responsible for ALL Ceph clusters on that node?
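For illustration (this is only my guess at the layout from the multi-cluster docs, not something I've tried; the cluster name "backup", the fsids, addresses, and ports below are all made up), I'm picturing a node that participates in two clusters roughly like this:

    # /etc/ceph/ceph.conf -- default cluster, named "ceph"
    [global]
    fsid = <uuid-of-first-cluster>
    mon initial members = node1
    mon host = 192.168.0.10:6789

    # /etc/ceph/backup.conf -- hypothetical second cluster, named "backup"
    [global]
    fsid = <uuid-of-second-cluster>
    mon initial members = node1
    mon host = 192.168.0.10:6790   # different port, so a second monitor could run beside the first

In other words, does each cluster end up with its own ceph-mon process on that node (started with --cluster ceph / --cluster backup), or does a single monitor somehow serve both?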

So (b) isn't really a Ceph issue; that's nice to know. Do you have any recommendations on the minimum kernel/glibc versions and minimum RAM needed to run Ceph on a single client in native mode? The reason I ask is that in a few deployment scenarios (especially non-standard ones like telco platforms), hardware gets added gradually, so it's more important to be able to scale the cluster out gracefully. I actually see Ceph as an alternative to a SAN, using JBODs from individual machines to create a larg(ish) storage cluster. Also, the clients would usually be running on the same hardware as the OSDs/MONs, because space in the chassis is at a premium.

For (d), I was thinking about single-node failure scenarios: with 3 nodes, wouldn't the failure of 1 node prevent Paxos from working?
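Just to check my own arithmetic (a rough sketch, assuming quorum is a strict majority of the configured monitors; the quorum_size helper below is purely illustrative):

    def quorum_size(n_monitors):
        # Paxos requires a strict majority of the configured monitors
        return n_monitors // 2 + 1

    for n in (3, 4, 5):
        q = quorum_size(n)
        print(n, "monitors: quorum =", q, "-> tolerates", n - q, "failure(s)")

    # 3 monitors: quorum = 2 -> tolerates 1 failure(s)
    # 4 monitors: quorum = 3 -> tolerates 1 failure(s)
    # 5 monitors: quorum = 3 -> tolerates 2 failure(s)

If that arithmetic is right, the 2 surviving monitors out of 3 would still form a majority, and adding a 4th monitor wouldn't tolerate any more failures than 3 do. Is that the correct way to think about it?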



Thanks,
Hari





On Fri, Jul 26, 2013 at 10:00 AM, John Wilkins <john.wilkins@xxxxxxxxxxx> wrote:
(a) Yes. See http://ceph.com/docs/master/rados/configuration/ceph-conf/#running-multiple-clusters
and http://ceph.com/docs/master/rados/deployment/ceph-deploy-new/#naming-a-cluster
(b) Yes. See http://wiki.ceph.com/03FAQs/01General_FAQ#How_Can_I_Give_Ceph_a_Try.3F
 Mounting Ceph via the kernel client on the same node as Ceph daemons can
cause older kernels to deadlock.
(c) Someone else can probably answer that better than me.
(d) At least three. Paxos requires a simple majority, so 2 out of 3 is
sufficient. See
http://ceph.com/docs/master/rados/configuration/mon-config-ref/#background
particularly the monitor quorum section.

On Wed, Jul 24, 2013 at 4:03 PM, Hariharan Thantry <thantry@xxxxxxxxx> wrote:
> Hi folks,
>
> Some very basic questions.
>
> (a) Can I run more than one Ceph cluster on the same node (assuming that
> I have no more than one monitor per node, but storage from one node is
> contributed to more than one cluster)?
> (b) Are there any issues with running Ceph clients on the same node as the
> other Ceph storage cluster entities (OSD/MON)?
> (c) Is the best way for multiple clients to access the Ceph storage cluster
> in native mode to host a shared-disk filesystem (like OCFS2) on top of an RBD
> image? What if these clients were running inside VMs? Could one then create
> independent partitions on top of the RBD image and give a partition to each of
> the VMs?
> (d) Isn't the realistic minimum number of monitors in a cluster at least 4
> (to guard against one failure)?
>
>
> Thanks,
> Hari
>
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>



--
John Wilkins
Senior Technical Writer
Inktank
john.wilkins@xxxxxxxxxxx
(415) 425-9599
http://inktank.com

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
