Re: What HBA to choose? To expand or not to expand?

Marc Roos wrote:

> We use these :
> NVDATA Product ID              : SAS9207-8i
> Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2308
> PCI-Express Fusion-MPT SAS-2 (rev 05)
> 
> Does someone by any chance know how to turn on the drive identification
> lights?

Tested with a MegaRAID SAS 2108 / DELL H700:

megacli -PDList -a0

Get the enclosure device ID and slot number from the output:
Enclosure Device ID: 32
Slot Number: 0

megacli -PdLocate -start -physdrv '[32:0]' -a0
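
To turn the light off again afterwards:

megacli -PdLocate -stop -physdrv '[32:0]' -a0

For an IT-mode controller like the SAS2308 / SAS9207-8i there is no
MegaRAID firmware (and no megacli), but sas2ircu should be able to do
the same. Untested here, and the controller/enclosure/slot numbers
below are only examples; check "sas2ircu list" and "sas2ircu 0 display"
first:

sas2ircu 0 locate 2:0 ON
sas2ircu 0 locate 2:0 OFF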

> 
> -----Original Message-----
> From: Jake Young [mailto:jak3kaj-Re5JQEeQqe8AvxtiuMwx3w@xxxxxxxxxxxxxxxx]
> Sent: Tuesday, 19 September 2017 18:00
> To: Kees Meijs; ceph-users-Qp0mS5GaXlQ@xxxxxxxxxxxxxxxx
> Subject: Re:  What HBA to choose? To expand or not to expand?
> 
> 
> On Tue, Sep 19, 2017 at 9:38 AM Kees Meijs
> <kees-FaqLbeXgz6Y@xxxxxxxxxxxxxxxx> wrote:
> 
> 
> Hi Jake,
> 
> On 19-09-17 15:14, Jake Young wrote:
> > Ideally you actually want fewer disks per server and more servers.
> > This has been covered extensively in this mailing list. Rule of
> > thumb is that each server should have 10% or less of the capacity
> > of your cluster.
> 
> That's very true, but let's focus on the HBA.
> 
> > I didn't do extensive research to decide on this HBA, it's simply
> > what my server vendor offered. There are probably better, faster,
> > cheaper HBAs out there. A lot of people complain about LSI HBAs,
> > but I am comfortable with them.
> 
> The configuration our vendor offered is an LSI/Avago 9300-8i with
> 8 drives connected individually using SFF8087 to a backplane (i.e.
> not an expander), or 24 drives using three HBAs (6x SFF8087 in
> total) in a 4U SuperMicro chassis with 24 drive bays.
> 
> But, what are the LSI complaints about? Or, are the complaints
> generic to HBAs and/or cryptic CLI tools and not LSI specific?
> 
> 
> Typically people rant about how much Megaraid/LSI support sucks. I've
> been using LSI or MegaRAID for years and haven't had any big problems.
> 
> I had some performance issues with Areca onboard SAS chips (non-Ceph
> setup, 4 disks in a RAID10); after about 6 months of troubleshooting
> with the server vendor and Areca support they did patch the firmware
> and resolve the issue.
> 
> 
> 
> 
> > There is a management tool called storcli that can fully configure
> > the HBA in one or two command lines.  There's a command that
> > configures all attached disks as individual RAID0 disk groups.
> > That command gets run by salt when I provision a new osd server.
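
For what it's worth, the storcli one-liner described above is most
likely something along these lines (untested here; exact syntax differs
between storcli releases, so check "storcli64 /c0 add vd help" first):

storcli64 /c0 show
storcli64 /c0 add vd each type=raid0

Where the firmware supports real passthrough, the per-disk RAID0
groups can be skipped entirely with:

storcli64 /c0 set jbod=on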
> 
> The thread I read was about Areca in JBOD mode but still able to
> utilise the cache, if I'm not mistaken. I'm not sure anymore whether
> a BBU was mentioned.
> 
> 
> JBOD with WB cache would be nice so you can get SMART data directly
> from the disks instead of having to interrogate the HBA for it.  This
> becomes more important once your cluster is stable and in production.
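
As an aside, SMART data is still reachable behind MegaRAID RAID0 groups
via smartctl's megaraid passthrough; the device ID below is only an
example (take it from the "Device Id" field in megacli -PDList output):

smartctl -a -d megaraid,5 /dev/sda

With a plain JBOD HBA the disk can be queried directly:

smartctl -a /dev/sdb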
> 
> IMHO if there is unwritten data in a RAM chip, like when you enable WB
> cache, you really, really need a BBU. This is another nice thing about
> using SSD journals instead of HBAs in WB mode: the journaled data is
> safe on the SSD before the write is acknowledged.
> 
> 
> 
> 
> >
> > What many other people are doing is using the least expensive JBOD
> > HBA or the on-board SAS controller in JBOD mode and then using SSD
> > journals. Save the money you would have spent on the fancy HBA for
> > fast, high-endurance SSDs.
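
For context, with filestore OSDs the SSD journal is simply handed to
ceph-disk at provisioning time; a rough sketch, assuming /dev/sdd is
the data disk and /dev/sda the shared journal SSD:

ceph-disk prepare /dev/sdd /dev/sda
ceph-disk activate /dev/sdd1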
> 
> Thanks! And obviously I'm very interested in other comments as well.
> 
> Regards,
> Kees
> 
> _______________________________________________
> ceph-users mailing list
> ceph-users-idqoXFIVOFJgJs9I8MT0rw@xxxxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


