New Cluster - Any requests?

Hi All,

I’ve just finished building a new POC cluster comprising the following:-

4 hosts in 1 chassis (http://www.supermicro.com/products/system/4U/F617/SYS-F617H6-FTPT_.cfm), each with the following:-

2x Xeon E5-2620 v2 (2.1GHz)

32GB RAM

2x onboard 10GBase-T NICs into 10GbE switches

10x 3TB WD Red Pro disks (currently in a k=3, m=3 EC pool, so ~55TB usable; see the pool sketch after this list)

2x 100GB Intel S3700 SSDs for journals and OS

1x 400GB Intel S3700 SSD for the SSD cache tier

Ubuntu 14.04.2 (3.16 kernel)

Running Ceph 0.87.1 (Giant)
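
For anyone wanting to reproduce the pool layout, the EC pool and cache tier were created along these lines (profile/pool names and PG counts here are placeholders rather than exactly what I used; note that with only 4 hosts and k+m=6 chunks, the failure domain has to be osd rather than host):

# 3 data + 3 coding chunks; failure domain osd, as 6 chunks won't fit across 4 hosts
ceph osd erasure-code-profile set ec33 k=3 m=3 ruleset-failure-domain=osd
ceph osd pool create ecpool 1024 1024 erasure ec33

# Replicated pool on the 400GB S3700s to act as the writeback cache tier
# (assumes a CRUSH ruleset named "ssd" that selects only the SSD OSDs)
ceph osd pool create cachepool 128 128 replicated ssd
ceph osd tier add ecpool cachepool
ceph osd tier cache-mode cachepool writeback
ceph osd tier set-overlay ecpool cachepool

# Cap the cache so flushing kicks in before the SSDs fill
ceph osd pool set cachepool target_max_bytes 1000000000000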

It’s currently in testing whilst I get iSCSI over RBD working to a state I’m happy with.
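
For those interested in the iSCSI side, the general shape of what I’m testing is a kernel-mapped RBD exported through LIO, roughly as below (image and IQN names are placeholders, and exact targetcli syntax varies between versions):

# Create and map an RBD image on the gateway node
rbd create iscsi-test --size 102400 --pool rbd
rbd map iscsi-test --pool rbd   # shows up as e.g. /dev/rbd0

# Export the mapped device as an iSCSI LUN via LIO
targetcli /backstores/block create name=iscsi-test dev=/dev/rbd0
targetcli /iscsi create iqn.2015-04.com.example:iscsi-test
targetcli /iscsi/iqn.2015-04.com.example:iscsi-test/tpg1/luns create /backstores/block/iscsi-test
targetcli /iscsi/iqn.2015-04.com.example:iscsi-test/tpg1/portals create 0.0.0.0 3260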

As a very rough idea of performance from the SSD tier, I’m seeing about 10K write IOPS and 40K read IOPS with 4KB IOs at a queue depth of 32. During these benchmarks, total CPU usage on each host is at about 80%, and remember that this is with just 1 SSD OSD per host.
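
For anyone who wants to compare like-for-like when the full numbers go up, a fio run along these lines is one way to generate comparable 4KB/QD32 figures (pool and image names are placeholders, and this isn’t necessarily the exact tool or job file I used):

# 4KB random writes at queue depth 32, straight through librbd
fio --name=4k-randwrite --ioengine=rbd --clientname=admin \
    --pool=ecpool --rbdname=bench-img \
    --rw=randwrite --bs=4k --iodepth=32 --direct=1 \
    --runtime=60 --time_based

# Same again for reads
fio --name=4k-randread --ioengine=rbd --clientname=admin \
    --pool=ecpool --rbdname=bench-img \
    --rw=randread --bs=4k --iodepth=32 --direct=1 \
    --runtime=60 --time_based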

Idle power usage is around 500-600W.

I’m intending to post performance numbers for the individual components and for RBD within the next couple of weeks, but if anybody has requests for tests or changes whilst it’s in testing, please let me know. I’m happy to create new pools and make config changes, along the lines of the examples below, but nothing that will result in me rebuilding the cluster from scratch.
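
To give an idea of the kind of thing that’s easy to do without a rebuild (names and values here are just examples):

# Spin up a throwaway replicated pool for a requested test
ceph osd pool create testpool 256 256

# Inject a config change into all OSDs at runtime (reverts on restart)
ceph tell osd.\* injectargs '--osd_op_threads 8'

# Tear the pool down again afterwards
ceph osd pool delete testpool testpool --yes-i-really-really-mean-it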

Nick


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
