Re: Large Linux RAID System (lots of drives)

On 31/10/18 19:21, Carsten Aulbert wrote:
> Hi Adam,
>
> On 10/31/18 04:12, Adam Goryachev wrote:
>> My question is: what is the best way to get 16 x SSDs connected in a
>> single system? Would I just get a 16-port SATA adapter like this:
>>
>> https://www.newegg.com/global/au-en/Product/Product.aspx?Item=N82E16816103121&cm_re=adaptec-_-16-103-121-_-Product
>>
>> Adaptec 1000 2288200-R (1000-16e) 8-Lane PCIe Gen3 Low-Profile, MD2
>> SATA / SAS 12 Gb/s PCIe Gen3 Host Bus Adapter
>> 4 (x4) SFF-8644 External Connectors
>>
>> Or is there another option (potentially more reliable under Linux
>> and/or cheaper) that could do the same thing?
> For the past few years we have moved away from "intelligent" RAID
> controllers to dumb HBAs (just as you propose), and there we almost
> exclusively go with LSI/Broadcom/Avago/whatever their current name is
> nowadays, as for the time being they just "work" under Linux. I think
> the "Broadcom SAS 9305-16i" would fit the bill if you wanted to have
> internal fan-out cables. Local mail-order pricing here in Germany is
> around AUD 630.
Perfect, that helps a lot. Looks like this one would be better (cheaper but still good):
https://www.newegg.com/Product/Product.aspx?Item=N82E16816118249&ignorebbr=1
> They usually come with either "IR" (Integrated RAID) or "IT" (Initiator
> Target) firmware, where the latter is the dumb HBA version you really
> want, but flashing to IT mode is usually not that hard.
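
Good to know. As a sanity check once the card is in (assuming one of the Broadcom/LSI SAS3 parts, which I believe the kernel drives with mpt3sas), something like this should confirm it shows up as a plain HBA with each SSD as its own /dev/sd* device:

  # the HBA should bind the plain mpt3sas driver, not a MegaRAID one
  lspci -k | grep -A 3 -i sas
  # each SSD should then appear as an individual block device
  lsblk -o NAME,MODEL,SIZE,TRAN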

> From there you have the choice of using fan-out cables, i.e. directly
> going with 4 SATA ports per port on the card (like the one you linked
> to), e.g.
> https://www.microsatacables.com/external-mini-sas-hd-sff8644-to-4-x-sata-cable-1-meter-sff-754-1m
>
> or you could go with a chassis which does the conversion for you via a
> SAS backplane - we usually go with the latter as it makes it easier to
> hot-swap components from racked equipment.
I know I've used internal "boxes" to convert 3 x 5.25" bays into multiple 3.5" and/or 2.5" bays, but I'm having a lot of trouble finding the right terms to search for now. Could you point me towards something you have used in the past? Trying to fit 16 x 2.5" drives into a single system might be a squeeze otherwise.
> Apart from that, depending on the anticipated load of the system, you
> may want to ensure you have enough CPU power in the box, as whichever
> way you "RAID" the devices it will take a bite out of the CPU(s) -
> unless you go with RAID0 (don't!) or RAID10 [1]. The same applies if
> you plan on having (multiple) 10Gb/s NICs moving data in/out of the box.
We won't have 10G ethernet here, just a single 1G ethernet. It is only our DR system, so crappy performance is not an issue for a few days or so while we source better/faster equipment to get back to a fully working/functional system.
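
That said, I gather the kernel benchmarks its RAID6/xor routines at boot, so once the box is up something like this should give a rough idea of the parity throughput the CPUs can manage, and the actual resync/check speed will show up later:

  # boot-time benchmark of the raid6/xor algorithms md will use
  dmesg | grep -iE 'raid6|xor'
  # once an array exists, resync/check progress and speed appear here
  cat /proc/mdstat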
> Does this help a bit?
A lot, thank you!
> [1] Depending on usage scenario, we usually go with simple md-raid1/10/6
> or with ZFS raidz/raidz2, where the latter can be even harder on the CPU.

I'll be using RAID6 across the 16 x 800G drives to give 11.2TB usable space, compared to a possible 11.4TB from 7 x 1.9TB SSDs in RAID5 on the primary servers (so we would need to make sure we don't provision more than the 11.2TB available on the backup server).
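
Presumably creating the array on the DR box would look something along these lines (the device names are just placeholders for the 16 SSDs behind the HBA):

  # 16 devices with double parity -> (16 - 2) x 800GB = 11.2TB usable
  # (vs (7 - 1) x 1.9TB = 11.4TB for the RAID5 on the primaries)
  mdadm --create /dev/md0 --level=6 --raid-devices=16 /dev/sd[b-q]
  mdadm --detail /dev/md0   # confirm level, device count and array size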

Regards,
Adam


