Hello,

On Wed, 22 Jun 2016 11:09:46 +1200 Denver Williams wrote:

> Hi All,
>
> I'm planning an OpenStack private cloud deployment and I'm trying to
> decide which would be the better option.
>
> What would the performance advantages/disadvantages be when comparing a
> 3-node Ceph setup with 15k/12Gb/s SAS drives in HP DL380p G8 servers,
> with SSDs for write cache, against something like an HP MSA 2040 10GbE
> iSCSI array? All network connections would be 10GbE.
>

Very complex question, and it's easy to compare apples and oranges here
as well.

For starters, I have no experience with that (or any similar) SAN, and
all my iSCSI experience is purely based on tests, not production
environments (and thus no real performance numbers).

I'd also pit non-HP boxes (unless you get a massive discount from them,
of course) against the SAN, both for cost and design flexibility. And
15k RPM or not, 12Gb/s SAS is overkill in my book for anything but SSDs.

That all being said, I'd venture the SAN will win performance-wise: its
4GB of hardware cache on the RAID controllers can mask RAID6 performance
drops, and if you deploy RAID10 and SSD tiering with it, that should
only get better. There's a reason I deploy my mailbox servers as
DRBD/Pacemaker cluster pairs and not with Ceph as backing storage.

3 Ceph storage nodes will give you the usable capacity of just one node
due to replication, and you incur the latency penalty associated with
that replication as well. Ceph could outgrow and potentially outperform
that SAN (in its maximum configuration), but clearly you're not looking
for that. Ceph also has potentially more resilience, but that's not a
performance question either.

It would be helpful to put a little more meat on that question, as in:

- What are your needs (space, IOPS)?
- What are the costs for either solution?
(Get a quote from HP.)

Christian

--
Christian Balzer        Network/Systems Engineer
chibi@xxxxxxx           Global OnLine Japan/Rakuten Communications
http://www.gol.com/

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
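[Editor's note: the replication capacity point above can be made concrete with a quick sketch in plain Python. Node count, drives per node, and drive size below are illustrative assumptions, not figures from the thread; only the replication factor of 3 (Ceph's default for replicated pools) is taken from the discussion.]

```python
# Illustrative usable-capacity math for a small Ceph cluster.
# Hardware numbers are hypothetical examples.
nodes = 3
drives_per_node = 8
drive_tb = 0.9            # e.g. 900 GB 15k SAS drives
replication_size = 3      # Ceph's default replicated pool size

raw_tb = nodes * drives_per_node * drive_tb
usable_tb = raw_tb / replication_size   # each object is stored 3x

print(f"raw: {raw_tb:.1f} TB, usable: {usable_tb:.1f} TB")
# With size=3 spread across 3 nodes, usable capacity works out to
# roughly the raw capacity of a single node -- the point made above.
```

Every write also has to reach all three replicas before it is acknowledged, which is where the latency penalty mentioned above comes from.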