Hardware configuration for OSD in a new all flash Ceph cluster

Hello,

 

We'd like to set up a Ceph cluster for IOPS-optimized workloads. Our needs are object storage (S3A for Spark, Boto for Python notebooks, …), RBD and, eventually, CephFS.
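
For reference, access from the Python notebooks would go through the RGW S3 API; a minimal boto3 sketch is below (assuming boto3 rather than legacy boto). The endpoint URL, credentials and bucket name are placeholders for illustration, not our actual setup.

    # Minimal sketch: S3 access against a Ceph RGW endpoint from a notebook.
    # Endpoint, credentials and bucket name are hypothetical placeholders.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="http://rgw.example.internal:8080",  # hypothetical RGW endpoint
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    # Create a bucket and round-trip a small object as a smoke test.
    s3.create_bucket(Bucket="notebook-test")
    s3.put_object(Bucket="notebook-test", Key="hello.txt", Body=b"hello from boto3")
    print(s3.get_object(Bucket="notebook-test", Key="hello.txt")["Body"].read())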

Based on various readings about IOPS-optimized Ceph workloads, we are thinking of buying the following kind of servers for the OSD nodes:

- Dell R740 chassis with up to 16 x 2.5" drives
- 2 x Intel Xeon Silver 4116 2.1 GHz, 12C/24T, 9.6 GT/s, 16M cache, Turbo, HT (85 W), DDR4-2400
- 8 x 16 GB RDIMM, 2666 MT/s, dual rank
- HBA330 controller, 12 Gbps adapter, low profile
- 16 x 1.92 TB SAS mixed-use SSD, 12 Gbps, 512n, 2.5" hot-plug (PX05SV, 3 DWPD, 10,512 TBW)
- OS disks: BOSS controller card with 2 x 120 GB M.2 sticks (RAID 1), full height
- Broadcom 5720 quad-port 1 GbE network daughter card (configuration interface)
- Mellanox ConnectX-3 Pro dual-port 40 GbE QSFP+ PCIe adapter, full height (cluster and client interface)

We will use the latest stable Luminous Ceph release supported by Red Hat. Therefore, we will use the XFS filesystem with the journal co-located with the OSD data on the same SSD.
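
To make that layout concrete, the relevant [osd] settings would look roughly like the snippet below; the 10 GB journal size is only an illustrative assumption, not a tested value.

    [osd]
    # FileStore on XFS, journal as a partition on the same SSD as the OSD data
    osd objectstore = filestore
    osd mkfs type = xfs
    # journal size in MB; 10 GB here is just an assumption for illustration
    osd journal size = 10240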

 

We will begin with 9 OSD servers and use a 3x or, maybe, a 2x replication factor, since it is an all-flash Ceph cluster.
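
For what it is worth, the raw and usable capacities of the initial deployment work out roughly as follows (before any allowance for utilization headroom):

    # Rough capacity estimate for 9 servers x 16 x 1.92 TB SSDs.
    servers = 9
    ssds_per_server = 16
    ssd_tb = 1.92

    raw_tb = servers * ssds_per_server * ssd_tb
    print(f"raw:              {raw_tb:.2f} TB")      # 276.48 TB
    print(f"usable at 3x rep: {raw_tb / 3:.2f} TB")  # 92.16 TB
    print(f"usable at 2x rep: {raw_tb / 2:.2f} TB")  # 138.24 TB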

 

What do you think of this configuration?

 

Réal Waite

