Re: Ceph, SSD, and NVMe

On Thu, Oct 01, 2015 at 10:01:03PM -0400, J David wrote:
> So, do medium-sized IT organizations (i.e. those without the resources
> to have a Ceph developer on staff) run Hammer-based deployments in
> production successfully?
I'm not sure if I count, given that I'm now working at DreamHost as the
in-house Ceph/RGW developer, but here is the experience that gave me my
background on Ceph:

At one of my prior positions, I did the prototype & production
deployment of our (small) Ceph cluster. Usage was predominantly via
RGW/S3, with a few RBD volumes exported via iSCSI because it was
convenient.
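
For anyone curious what provisioning those volumes looks like, here is a
rough sketch using the Python rbd/rados bindings; it is not the exact
tooling from that deployment, and the pool name 'rbd' and image name
'iscsi-vol0' are placeholders:

    import rados
    import rbd

    # Connect to the cluster using the standard config file.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('rbd')  # pool name is a placeholder
        try:
            # Create a 100 GiB image; the image name is made up for this example.
            rbd.RBD().create(ioctx, 'iscsi-vol0', 100 * 1024 ** 3)
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()

Exporting over iSCSI then typically just means mapping the image on a
gateway host with the kernel RBD client and pointing the usual Linux
iSCSI target stack at the resulting block device.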

As a very small non-profit, we had an extremely small budget, and the
hardware reflected that. The same hardware also ran VMs, which shared
the SSDs but otherwise did not use Ceph, apart from a very small number
of RBD volumes.

Per-node hardware for the production cluster was:
Supermicro 2U twin (X9DRT-HF+ boards)
Specs for each side of the twin:
Dual Xeon E5-2650
256GB RAM (started at 64GB, grew over time for the VMs)
4x 4TB SAS
2x 512GB Samsung 840 PRO
(later upgraded with 10Gbit SFP interconnect)
Initial build date August 2013.

The development cluster was built about 8 months earlier from scraps &
spares.

-- 
Robin Hugh Johnson
Gentoo Linux: Developer, Infrastructure Lead
E-Mail     : robbat2@xxxxxxxxxx
GnuPG FP   : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85