I'd definitely be happy to share what numbers I can get out of it. I'm still a neophyte with Ceph and am learning how to operate it, set it up, etc. My limited performance testing to date has been with the "stock" XFS filesystem that ceph-disk builds for the OSDs, basic PG/CRUSH map setup, and "dd" runs across RBD-mounted volumes. I'm learning how to scale it up and start tweaking and tuning. If anyone on the list is interested in specific tests and can provide detailed instructions on configuration, test patterns, etc., I'm happy to run them if I can.

We're baking in automation around the Ceph deployment from a fresh build, using the Open Crowbar deployment tooling with a Ceph workload on it. Right now we're modifying the Ceph workload to work across multiple L3 rack boundaries in the cluster.

Physical servers are Dell R720xd platforms, with 12 spinning 4TB 7200 RPM data disks and 2x 10k 600 GB mirrored OS disks. Memory is 128 GB, with dual 6-core HT CPUs.

~~shane

On 7/1/15, 5:24 PM, "German Anders" <ganders@xxxxxxxxxxxx> wrote:
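For concreteness, here is a minimal sketch of the kind of dd-based sequential-write run described above. The specifics are assumptions, not taken from the mail: it presumes the image is already mapped (e.g. via rbd map) at the hypothetical path /dev/rbd0, that the device can safely be overwritten, and that 4M blocks with O_DIRECT are a reasonable way to approximate streaming throughput.

#!/usr/bin/env python3
"""Sketch of a dd-style sequential-write test against a mapped RBD device.

Assumptions (illustrative only): the image is already mapped at /dev/rbd0
and holds no data you care about; this writes directly to the device.
"""
import subprocess

RBD_DEVICE = "/dev/rbd0"  # hypothetical device path; adjust to your mapping
BLOCK_SIZE = "4M"         # large sequential blocks to measure streaming throughput
COUNT = "1024"            # 1024 x 4M = 4 GiB written per run

# oflag=direct bypasses the client page cache so the result reflects the
# cluster rather than local RAM; dd reports the achieved MB/s on stderr.
subprocess.run(
    ["dd", "if=/dev/zero", f"of={RBD_DEVICE}",
     f"bs={BLOCK_SIZE}", f"count={COUNT}", "oflag=direct"],
    check=True,
)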