Michael,
I wouldn't be too concerned about SAS expanders, as long as you've got enough bandwidth to the HBA / RAID controller.
The main consideration with the SSDs is the ratio of journals to disks. When you lose an SSD, all the OSDs journalling to it will be inconsistent, effectively taking them offline. Too many disks per SSD can also hammer the journal, making the SSD the bottleneck.
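To make the ratio concrete, here is a rough sketch of how the shared journal ends up looking with filestore OSDs prepared via ceph-disk (the device names are just examples, not a recommendation):

  # Each prepare call carves another journal partition out of the shared NVMe SSD,
  # so every spinning disk listed here depends on /dev/nvme0n1 staying healthy.
  ceph-disk prepare /dev/sdb /dev/nvme0n1   # OSD data on sdb, journal on the NVMe SSD
  ceph-disk prepare /dev/sdc /dev/nvme0n1   # second OSD sharing the same journal device
  ceph-disk prepare /dev/sdd /dev/nvme0n1   # and so on; losing the SSD takes all of these OSDs out

The more lines like that you add per SSD, the bigger the blast radius when it fails, and the more write traffic funnels through that one device.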
Even though the Intel 750 SSDs provide exceptional performance, journalling more than 8 OSDs to one seems like it could be risky / impact performance.
You could consider a cache tier instead: create two separate pools, one consisting of SSDs and the other of spinning disks, then flush cold objects from the SSD tier down to the disk tier.
http://docs.ceph.com/docs/master/rados/operations/cache-tiering/
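Roughly, the setup from that page looks like the following (pool names are placeholders and the thresholds are just example values):

  ceph osd tier add cold-storage hot-ssd
  ceph osd tier cache-mode hot-ssd writeback
  ceph osd tier set-overlay cold-storage hot-ssd
  ceph osd pool set hot-ssd hit_set_type bloom
  ceph osd pool set hot-ssd target_max_bytes 750000000000
  ceph osd pool set hot-ssd cache_target_dirty_ratio 0.4
  ceph osd pool set hot-ssd cache_target_full_ratio 0.8

The hot-ssd pool would also need a CRUSH rule restricting it to the SSD OSDs, which is covered in the CRUSH map documentation.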
Kind Regards,
Alex.

From: Michael Barkdoll <mabarkdoll@xxxxxxxxx>
Sent: 11 February 2016 15:05
To: Alex Leake
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: Dell Ceph Hardware recommendations

Thanks Alex,
Dell provided me with their current recommended cloud architecture for a Red Hat OpenStack/Ceph deployment.
The R730XD is also recommended by Dell. I'd save about $2000 per server if I purchase the Dell PowerEdge T630 compared to the R730XD. The RAID controllers are the same; I'm just curious whether there would be any issue with the SAS expander backplane.
An old Inktank deployment guide from 2013 recommended against using SAS expander backplanes. However, RAID controllers are now 12Gb/s, so I'm not certain whether this is still an issue.
Also, I was thinking about using an Intel 750 Series AIC 800GB PCI-Express 3.0 x4 MLC SSD (SSDPEDMW800G4X1) for caching, rather than the 1.2TB model. Does anyone know how many hard disks I can use with it? With the 1.2TB model, I may have read somewhere that it supports around 12-16 disks.
I'm still leaning toward the T630, maybe with a PERC H730, for cost savings. I'm curious whether the PERC H730 offers any advantage over the PERC H330 for Ceph. Thanks so much everyone for the much needed feedback!
Michael Barkdoll
On Thu, Feb 11, 2016 at 7:54 AM, Alex Leake <A.M.D.Leake@xxxxxxxxxx> wrote:
Hello Michael,
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com