Re: What HBA to choose? To expand or not to expand?

On Tue, Sep 19, 2017 at 7:34 AM Kees Meijs <kees@xxxxxxxx> wrote:
Hi list,

It's probably something to discuss over coffee in Ede tomorrow, but I'll
ask anyway: what HBA is best suited for Ceph nowadays?

In an earlier thread I read some comments about "dumb" HBAs running
in IT mode while still being able to use the cache on the HBA. Does that
make sense? Or is this dangerous, similar to RAID solutions* without a BBU?


Yes, that would be dangerous without a BBU: on power loss, writes the controller has already acknowledged but still holds only in volatile cache are lost, which breaks the durability Ceph expects from its OSDs.




(On a side note, we're planning on not using SAS expanders anymore, but
instead "wiring" each individual disk, e.g. one SFF-8087 cable per four
disks, minimising the risk of bus congestion and/or lock-ups.)

Anyway, in short: I'm curious to hear opinions on the brand, type and
configuration of HBA to choose.

Cheers,
Kees

*: apologies for cursing.


It depends a lot on how many disks you want per server.

Ideally you actually want fewer disks per server and more servers; this has been covered extensively on this mailing list. A rule of thumb is that each server should hold 10% or less of your cluster's total capacity, so losing or rebalancing a single node never puts more than a tenth of the data in motion. For example, in a 500TB cluster no single server should contribute more than about 50TB.

In my cluster I use the LSI 3108 HBA with 4GB of RAM, a BBU and nine 3.5" 2TB disks in 2U servers. Each disk is configured as a single-disk RAID0 group so I can use the write-back cache. I chose to use the HBA for write coalescing rather than SSD journals. It isn't as fast as SSD journals could be, but it is cheaper and simpler to install and maintain.

I didn't do extensive research to decide on this HBA; it's simply what my server vendor offered. There are probably better, faster, cheaper HBAs out there. A lot of people complain about LSI HBAs, but I am comfortable with them.

There is a management tool called storcli that can fully configure the HBA in one or two command lines. There's a command that configures all attached disks as individual RAID0 disk groups; it gets run by Salt when I provision a new OSD server.
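For reference, a rough sketch of those commands (the enclosure/slot range 252:0-8 and the cache flags are placeholders, not copied from my Salt state; check the output of storcli /c0 show for the values on your own controller):

    # list the controller, its drives and their enclosure:slot IDs
    storcli /c0 show
    # create one RAID0 virtual drive per physical drive, with
    # write-back cache and read-ahead (only safe with a healthy BBU)
    storcli /c0 add vd each type=raid0 drives=252:0-8 wb ra cached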

What many other people are doing is using the least expensive JBOD HBA, or the on-board SAS controller in JBOD mode, and then adding SSD journals. Save the money you would have spent on the fancy HBA for fast, high-endurance SSDs.
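With the current ceph-disk tooling that looks roughly like this (a sketch assuming FileStore, with placeholder device names: /dev/sdb as the data disk, /dev/nvme0n1 as the shared journal SSD):

    # prepare an OSD on /dev/sdb, placing its FileStore journal
    # on a newly created partition of the SSD /dev/nvme0n1
    ceph-disk prepare /dev/sdb /dev/nvme0n1
    # activate the freshly prepared data partition
    ceph-disk activate /dev/sdb1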

Jake 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
