Re: Dell Ceph Hardware recommendations

Michael,


I wouldn't be too concerned about SAS expanders, so long as you've got enough bandwidth to the HBA / RAID controller.
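
As a rough sanity check (assuming a 4-lane 12Gb/s wide port to the expander, ~1200 MB/s per lane after encoding, and ~200 MB/s sustained per NL-SAS drive - all assumed figures, not measured):

    4 lanes x ~1200 MB/s  = ~4800 MB/s link bandwidth
    16 drives x 200 MB/s  =  3200 MB/s aggregate from the disks

so a single wide port should still have headroom for a full backplane of spinners.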


The main consideration with the journal SSDs is the ratio of SSDs to disks. When you lose an SSD, all the OSDs journalling to it become inconsistent, effectively taking them offline. Too many disks per SSD can also hammer the SSD, making it the bottleneck.


Even if the Intel 750 SSDs can provide exceptional performance, journalling more than 8 OSDs to one seems like it could be risky and hurt performance.
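
Back-of-the-envelope, assuming ~100 MB/s of sustained writes per spinner and ~800 MB/s sustained sequential write for the 800GB model (both assumed figures - check the datasheets):

     8 OSDs x 100 MB/s =  800 MB/s  (right at the SSD's limit)
    12 OSDs x 100 MB/s = 1200 MB/s  (well over it)

so somewhere beyond ~8 journals the SSD becomes the write bottleneck.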


You could consider cache tiers instead: make two separate pools, one comprising SSDs and the other just disks, then flush cold objects from the SSD tier.


http://docs.ceph.com/docs/master/rados/operations/cache-tiering/
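
For reference, the basic writeback setup from those docs looks something like this (the pool names are placeholders):

    # put the SSD pool in front of the disk pool as a writeback cache
    ceph osd tier add cold-storage hot-storage
    ceph osd tier cache-mode hot-storage writeback
    ceph osd tier set-overlay cold-storage hot-storage
    # the cache tier needs a hit set to track which objects are warm
    ceph osd pool set hot-storage hit_set_type bloom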



Kind Regards,

Alex.


From: Michael Barkdoll <mabarkdoll@xxxxxxxxx>
Sent: 11 February 2016 15:05
To: Alex Leake
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: Dell Ceph Hardware recommendations
 
Thanks Alex,

Dell provided me with their current recommended cloud architecture for a Red Hat OpenStack/Ceph deployment.

The R730XD is also recommended by Dell.  I'd save about $2000 per server if I purchase the Dell PowerEdge T630 instead of the R730XD.  The RAID controllers are the same; I'm just curious whether there would be any issue with the SAS expander backplane.  An old Inktank deployment guide from 2013 recommended against using SAS expander backplanes, yet RAID controllers are now 12Gb/s, so I'm not certain whether this is still an issue.

Also, I was thinking about using an Intel 750 Series AIC 800GB PCI-Express 3.0 x4 MLC SSD (SSDPEDMW800G4X1) rather than the 1.2TB model for caching.  Does anyone know how many hard disks I could use with it?  With the 1.2TB model, I may have read somewhere that it supports around 12-16 disks.

I'm still leaning toward the T630, perhaps with a PERC H730, for the cost savings.  I'm curious whether the PERC H730 offers any advantage over the PERC H330 for Ceph.  Thanks so much, everyone, for the much-needed feedback!


Michael Barkdoll

On Thu, Feb 11, 2016 at 7:54 AM, Alex Leake <A.M.D.Leake@xxxxxxxxxx> wrote:
Hello Michael,

I maintain a small Ceph cluster at the University of Bath; it consists of:

Monitors:
3 x Dell PowerEdge R630

 - 2x Intel(R) Xeon(R) CPU E5-2609 v3
 - 64GB RAM
 - 4x 300GB SAS (RAID 10)


OSD Nodes:
6 x Dell PowerEdge R730XD & MD1400 Shelves

 - 2x Intel(R) Xeon(R) CPU E5-2650
 - 128GB RAM
 - 2x 600GB SAS (OS - RAID1)
 - 2x 200GB SSD (PERC H730)
 - 14x 6TB NL-SAS (PERC H730)
 - 12x 4TB NL-SAS (PERC H830 - MD1400)


Please let me know if you want any more info.

In my experience thus far, this ratio (2 SSDs to 26 disks per node) has not been useful for cache tiering etc. - the SSDs are in a separate pool.

If I could start over, I'd go for fewer OSDs per host, and no SSDs (or a much better ratio, like 4:1).
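
One quick way to tell whether the journal SSDs are the limit (rather than the disks or the network) is to watch extended device stats under load, e.g.:

    iostat -x 5    # %util pinned near 100 on the journal device = saturated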


Kind Regards,
Alex.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
