> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> V.Ranganath
> Sent: 12 June 2015 06:06
> To: ceph-users@xxxxxxxxxxxxxx
> Subject: New to CEPH - VR@Sheeltron
>
> Dear Sir,
>
> I am New to CEPH. I have the following queries:
>
> 1. I have been using OpenNAS, OpenFiler, Gluster & Nexenta for storage OS.
> How is CEPH different from Gluster & Nexenta ?

Most of those solutions are based on old-style RAID groups (yes, I know ZFS isn't RAID, but it's similar). Ceph is scale-out, object-based storage: each disk is effectively controlled by an individual piece of software (an OSD daemon), and these daemons pool together to form a storage cluster.

> 2. I also use LUSTRE for our Storage in a HPC Environment. Can CEPH be
> substituted for Lustre ?

Possibly, but I'm not best placed to answer this.

> 3. What is the minimum capacity of storage (in TB), where CEPH can be
> deployed ? What is the typical hardware configuration required to support
> CEPH ? Can we use 'commodity hardware' like TYAN - Servers & JBODs to
> stack up the HDDs ?? Do you need RAID Controllers or is RAID/LUN built by
> the OS ?

I believe the minimum would be somewhere around 10GB, but moving from traditional RAID to Ceph really only tends to make sense once you get to around 100TB.

> 4. Do you have any doc. that gives me the comparisons with other Software
> based Storage ?

There isn't anything I'm aware of, probably because Ceph is so different that a single comparison is hard. Are there any particular points you are interested in comparing?

> Thanks & Regards,
>
> V.Ranganath
> VP - SI&S Division
> Sheeltron Digital Systems Pvt. Ltd.
> Direct: 080-49293307
> Mob: +91 88840 54897
> E-mail: ranga@xxxxxxxxxxxxx
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
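P.S. To make the "one piece of software per disk" model concrete, here is a minimal ceph.conf sketch; all names, IDs and addresses below are made up for illustration, not taken from any real cluster:

```ini
[global]
# cluster UUID -- example value only
fsid = 11111111-2222-3333-4444-555555555555
mon_initial_members = mon1
mon_host = 10.0.0.1
# keep three copies of each object across the cluster
osd_pool_default_size = 3

# One OSD daemon per physical disk -- no RAID controller or LUN
# setup needed; the daemons pool together to form the cluster.
[osd.0]
host = node1

[osd.1]
host = node1
```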