Re: What HBA to choose? To expand or not to expand?

Hi Jake,

On 19-09-17 15:14, Jake Young wrote:
> Ideally you actually want fewer disks per server and more servers.
> This has been covered extensively in this mailing list. Rule of thumb
> is that each server should have 10% or less of the capacity of your
> cluster.

That's very true, but let's focus on the HBA.
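
(Back-of-the-envelope, with assumed numbers: at 24 x 8 TB drives per node, that rule implies at least ten such nodes, i.e. roughly 1.9 PB raw, so that no single node carries more than 10% of the cluster.)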

> I didn't do extensive research to decide on this HBA, it's simply what
> my server vendor offered. There are probably better, faster, cheaper
> HBAs out there. A lot of people complain about LSI HBAs, but I am
> comfortable with them.

The configuration our vendor offered is based on the LSI/Avago 9300-8i,
with 8 drives connected individually to the backplane over SFF-8087
(i.e., no expander involved). Alternatively, 24 drives using three of
these HBAs (6x SFF-8087 in total) in a 4U SuperMicro chassis with 24
drive bays.
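
(The lane math behind that, assuming the usual 4 lanes per SFF-8087
connector: the 9300-8i exposes 8 internal lanes, so one card
direct-attaches exactly 8 drives, and 24 bays without an expander
therefore require 3 cards.)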

But what exactly are the LSI complaints about? Or are the complaints
generic to HBAs and/or their cryptic CLI tools, rather than
LSI-specific?

> There is a management tool called storcli that can fully configure the
> HBA in one or two command lines.  There's a command that configures
> all attached disks as individual RAID0 disk groups. That command gets
> run by salt when I provision a new osd server.
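
For the archives, a minimal sketch of what I understand that to mean
(untested; the storcli64 path, controller /c0, enclosure ID 252 and the
slot range are all assumptions, so check the "/c0 show" output first):

  # list the controller, its enclosures and attached drives
  storcli64 /c0 show

  # create one single-drive RAID0 virtual drive per physical disk
  # (enclosure 252, slots 0-7 assumed; adjust to the output above)
  for slot in $(seq 0 7); do
      storcli64 /c0 add vd type=raid0 drives=252:$slot wb ra direct
  done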

The thread I read was about an Areca controller in JBOD mode that could
still utilise its cache, if I'm not mistaken. I don't recall whether a
BBU was mentioned as well.

>
> What many other people are doing is using the least expensive JBOD HBA
> or the on board SAS controller in JBOD mode and then using SSD
> journals. Save the money you would have spent on the fancy HBA for
> fast, high endurance SSDs.
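
That is indeed the alternative I'm weighing. With filestore that would
boil down to something like this (a sketch only; the device names are
assumptions, and ceph-disk carves the journal partition out of the SSD
itself):

  # data on the spinner, journal on a shared SSD
  ceph-disk prepare /dev/sdb /dev/nvme0n1

  # activate the newly created data partition as an OSD
  ceph-disk activate /dev/sdb1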

Thanks! And obviously I'm very interested in other comments as well.

Regards,
Kees




