Re: What HBA to choose? To expand or not to expand?

On Wed, Sep 20, 2017 at 5:31 AM Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx> wrote:



We use these:
NVDATA Product ID              : SAS9207-8i
Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2308 PCI-Express Fusion-MPT SAS-2 (rev 05)

Does anyone happen to know how to turn on the drive identification
lights?

storcli64 /c0/e8/s1 start locate

where c is the controller ID, e is the enclosure ID, and s is the drive slot.

Look for the PD LIST section in the output of the following command to see the enclosure ID / slot ID mapping:

 storcli64 /c0 show
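
For example (the enclosure and slot values below are placeholders; take them from your own PD LIST output):

 # list every drive with its enclosure:slot ID
 storcli64 /c0/eall/sall show

 # turn the LED off again when you are done
 storcli64 /c0/e8/s1 stop locate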





-----Original Message-----
From: Jake Young [mailto:jak3kaj@xxxxxxxxx]
Sent: Tuesday, 19 September 2017 18:00
To: Kees Meijs; ceph-users@xxxxxxxx
Subject: Re: What HBA to choose? To expand or not to expand?


On Tue, Sep 19, 2017 at 9:38 AM Kees Meijs <kees@xxxxxxxx> wrote:


        Hi Jake,

        On 19-09-17 15:14, Jake Young wrote:
        > Ideally you actually want fewer disks per server and more servers.
        > This has been covered extensively in this mailing list. Rule of
        > thumb is that each server should have 10% or less of the capacity
        > of your cluster.

        That's very true, but let's focus on the HBA.

        > I didn't do extensive research to decide on this HBA; it's simply
        > what my server vendor offered. There are probably better, faster,
        > cheaper HBAs out there. A lot of people complain about LSI HBAs,
        > but I am comfortable with them.

        Given the configuration our vendor offered, it's either an LSI/Avago
        9300-8i with 8 drives connected individually using SFF8087 on a
        backplane (i.e. not an expander), or 24 drives using three HBAs
        (6x SFF8087 in total) when using a 4U SuperMicro chassis with 24
        drive bays.

        But, what are the LSI complaints about? Or, are the complaints
        generic to HBAs and/or cryptic CLI tools and not LSI specific?


Typically people rant about how much MegaRAID/LSI support sucks. I've
been using LSI or MegaRAID for years and haven't had any big problems.

I did have some performance issues with Areca onboard SAS chips (a
non-Ceph setup, 4 disks in a RAID10), but after about 6 months of
troubleshooting with the server vendor and Areca support they patched
the firmware and resolved the issue.




        > There is a management tool called storcli that can fully configure
        > the HBA in one or two command lines. There's a command that
        > configures all attached disks as individual RAID0 disk groups.
        > That command gets run by Salt when I provision a new OSD server.
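
        (For reference, a hedged sketch of such a one-liner; this is not
        necessarily the exact command referred to above, and the drive list
        8:0-7 is a placeholder taken from "storcli64 /c0 show" output.
        storcli syntax can vary between versions, so check
        "storcli64 /c0 add vd help" first.)

         # create one RAID0 virtual drive per attached disk
         storcli64 /c0 add vd each r0 drives=8:0-7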

        The thread I read was about Areca in JBOD mode but still able to
        utilise the cache, if I'm not mistaken. I'm not sure anymore whether
        there was something mentioned about a BBU.


JBOD with WB cache would be nice so you can get SMART data directly from
the disks instead of having to interrogate the HBA for it. This becomes
more important once your cluster is stable and in production.
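
For example, with a plain JBOD HBA smartctl can talk to the disk
directly, while behind a MegaRAID-style RAID0 disk group you have to go
through the controller's passthrough (the device names and the device ID
below are just examples):

 # disk presented directly by a JBOD HBA
 smartctl -a /dev/sdb

 # same physical disk hidden behind a MegaRAID RAID0 disk group,
 # where N is the controller's device ID for that disk
 smartctl -d megaraid,N -a /dev/sdb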

IMHO, if there is unwritten data sitting in a RAM chip, as there is when
you enable WB cache, you really, really need a BBU. This is another nice
thing about using SSD journals instead of HBAs in WB mode: the journaled
data is safe on the SSD before the write is acknowledged.




        >
        > What many other people are doing is using the least expensive
        > JBOD HBA or the onboard SAS controller in JBOD mode and then
        > using SSD journals. Save the money you would have spent on the
        > fancy HBA for fast, high-endurance SSDs.

        Thanks! And obviously I'm very interested in other comments as well.

        Regards,
        Kees

        _______________________________________________
        ceph-users mailing list
        ceph-users@xxxxxxxxxxxxxx
        http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
