Hi all,

First off, we have yet to start with Ceph (or any clustered file system other than QFS), so please consider me a total newbie w.r.t. Ceph.

We are trying to solve disk I/O problems we face and would like to explore whether we can trade some of our currently underused network capacity for more disk performance. We have a number of machines on hand to use, but I would like to learn how well Ceph scales to a large number of systems/disks.

In a conservative approach, we could use 16 big boxes with twelve 3 TB disks each and explore JBOD, hardware or software RAID, and 10 Gb/s Ethernet uplinks. On the other hand, we could scale out to about 1500-2500 machines, each with local disks (500 GB-1 TB) and/or SSDs (60 GB).

For now I have two questions concerning this:

(a) Would either approach work with O(2000) clients?

(b) Would Ceph scale well enough to have O(2000) disks in the background, each connected to the network at 1 Gb/s?

Does anyone have experience with these numbers of hosts, or do people put "access" nodes in between which export a Ceph file system via NFS or similar?

Cheers
Carsten

PS: As a first step, I think I'll go with 4-5 systems just to get a feel for Ceph; scaling out will be a later exercise ;)

--
Dr. Carsten Aulbert - Max Planck Institute for Gravitational Physics
Callinstrasse 38, 30167 Hannover, Germany
phone/fax: +49 511 762-17185 / -17193
https://wiki.atlas.aei.uni-hannover.de/foswiki/bin/view/ATLAS/WebHome
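
PPS: To put very rough numbers on question (b), here is a back-of-envelope sketch in plain Python. All figures are assumptions (roughly 100 MB/s sequential throughput per spinning disk, purely sequential streaming, and no replication or protocol overhead counted), so treat the output as illustrative only:

# Back-of-envelope comparison of aggregate raw bandwidth for the two
# hardware approaches described above. All numbers are assumptions.

GBIT = 1e9 / 8  # bytes per second in one gigabit

# Approach 1: 16 boxes, 12 x 3 TB disks each, one 10 Gb/s uplink per box
boxes = 16
disks_per_box = 12
disk_bw = 100e6              # assumed ~100 MB/s sequential per disk
box_disk_bw = disks_per_box * disk_bw
box_net_bw = 10 * GBIT       # 10 Gb/s uplink = 1.25 GB/s
print("Approach 1: %.2f GB/s disk vs %.2f GB/s net per box, "
      "~%.0f GB/s aggregate"
      % (box_disk_bw / 1e9, box_net_bw / 1e9,
         boxes * min(box_disk_bw, box_net_bw) / 1e9))

# Approach 2: ~2000 machines, one local disk each, 1 Gb/s NIC per machine
machines = 2000
machine_net_bw = 1 * GBIT    # 1 Gb/s = 125 MB/s, below one disk's streaming rate
print("Approach 2: ~%.0f GB/s aggregate network ceiling"
      % (machines * machine_net_bw / 1e9))

If I am reading the numbers right, the 16-box approach is roughly balanced (about 1.2 GB/s of disk behind a 1.25 GB/s uplink, so around 20 GB/s aggregate), while the scale-out approach has a much higher aggregate ceiling (~250 GB/s) but each node is NIC-bound at ~125 MB/s, and replication writes would consume part of that same 1 Gb/s link again.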