Dear Gaurav,

Ceph works best with more hardware; it is not really designed for small-scale setups. Of course a small setup can work for a PoC or testing, but I would not advise it for production. If you want to proceed anyway, have a good look at the manuals or this mailing list archive, and invest some time in understanding the logic and workings of Ceph before starting work or ordering hardware.

At the very least you want:

- 3 monitors, preferably on dedicated servers.
- One ceph-osd instance per disk, so a host with 2 disks will run 2 OSD instances. More OSD processes means better performance, but also more memory and CPU usage.
- By default Ceph uses a replication factor of 3 (it is possible to set this to 2, but that is not advised; see the size-2 comparison at the end of this mail).
- You cannot fill disks to 100%, and data will not distribute evenly over all disks, so expect disks to fill up to at most 60-70% on average. You want to add more disks once you reach this limit (the nearfull/full ratios in the config sketch below are the relevant settings).

All in all, a setup of 3 hosts with 2x2TB disks each gives a net data availability of (3 x 2 x 2TB x 0.6) / 3 = 2.4 TB.

If speed is required, consider SSDs (for data & journals, or only journals).

In your email you mention "compute1/2/3". Please note that if you use the rbd kernel driver, it can interfere with the OSD processes; it is not advised to run OSDs and the kernel driver on the same hardware. If you still want to do that, split it up using VMs (we have a small testing cluster where we do mix compute and storage; there we run the OSDs in VMs).
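As a starting point, here is a minimal ceph.conf sketch with the settings mentioned above. The fsid, hostnames and IPs are placeholders for your own environment, and the ratio and journal values are the upstream defaults as far as I know, so double-check them against the documentation for your release:

  [global]
  fsid = <your cluster uuid>
  mon initial members = mon1, mon2, mon3
  mon host = 10.0.0.1, 10.0.0.2, 10.0.0.3

  # replication factor for newly created pools (3 is the default; 2 is not advised)
  osd pool default size = 3
  # minimum number of replicas that must be available for writes to be accepted
  osd pool default min size = 2

  # when to warn resp. block writes as OSDs fill up
  mon osd nearfull ratio = .85
  mon osd full ratio = .95

  # journal size in MB, relevant if you put journals on SSD partitions
  osd journal size = 5120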
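For comparison, the same hardware with a replication factor of 2 would give (3 x 2 x 2TB x 0.6) / 2 = 3.6 TB. That extra space is exactly why size=2 looks tempting, but you trade it against the risk of losing data whenever a second disk fails while the cluster is still recovering from the first.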
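Once the cluster is running you can keep an eye on the (uneven) data distribution and fill levels with the standard commands, for example:

  ceph status   # overall health, including nearfull/full warnings
  ceph df       # raw and usable capacity, per-pool usage
  ceph osd df   # per-OSD utilisation (available since Hammer, if I remember correctly)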
Hope this helps,

regards,

mart
--
Mart van Santen
Greenhost
E: mart@xxxxxxxxxxxx
T: +31 20 4890444
W: https://greenhost.nl

A PGP signature can be attached to this e-mail, you need PGP software to verify it. My public key is available in keyserver(s); see: http://tinyurl.com/openpgp-manual

PGP Fingerprint: CA85 EB11 2B70 042D AF66 B29A 6437 01A1 10A3 D3A5