On 8/30/2015 4:59 AM, Adrian Sevcenco wrote:
On 08/30/2015 12:02 PM, Mike Mohr wrote:
In my experience, mass-market HBAs and RAID cards typically support
only 8 or 16 drives. For the internal variety in a standard rack-mount
server you'll usually see either 2 or 4 iPass cables (each of which supports
4 drives) connected to the backplane. The marketing material you've
referenced has a white lie in it: supporting more than 16 drives on a
single card is very likely only possible with an additional SAS expander
board. I believe Supermicro does sell some pre-configured systems with
ok, then i should give a little detail : the purpose was to have an 1U
server as a head of a JBOD chassis that have 2 SAS backplanes.
The connection would be a simple SAS cascade to the backplanes.
such hardware, but expect the throughput to fall through the floor if you
use such hardware.
Why? What is the difference between the silicon on an HBA card and the same
silicon on a motherboard?
I'm sure he's referring to what is essentially lane sharing. A SAS
expander is in many ways like an Ethernet switch. You have 8 lanes
coming off your SAS3008, either as two SFF-8087 connectors carrying 4
lanes each or as 8 individual SATA-style sockets on the motherboard.
You can plug any number of these into a host port on a SAS expander,
and you then have n*6Gbit of bandwidth from the host to the expander.
Then you plug targets and/or additional expanders into the downstream
ports. Everything on the downstream ports has to share that bandwidth,
so you can run into a wall if you try to push too much bandwidth to too
many devices at once. In practice, though, it is not usually a problem
with a 2-3x over-subscription of lanes with HDDs. You will see it,
though, if you are really pushing a lot of SSDs.
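
To make the over-subscription arithmetic concrete, here is a minimal
sketch; the lane count, lane speed, drive count, and per-drive
throughput below are illustrative assumptions, not measurements from
any particular system:

    #!/bin/bash
    # Rough over-subscription arithmetic for a SAS expander setup.
    # All numbers here are assumed for illustration only.
    lanes=4           # lanes in one SFF-8087 host connection
    lane_gbit=6       # SAS2 lane speed, Gbit/s
    drives=24         # drives hanging off the expander
    drive_mbs=150     # assumed sustained HDD throughput, MB/s

    uplink_mbs=$(( lanes * lane_gbit * 1000 / 8 ))  # ~3000 MB/s, ignoring encoding overhead
    demand_mbs=$(( drives * drive_mbs ))            # 3600 MB/s if every drive streams at once

    echo "uplink:            ${uplink_mbs} MB/s"
    echo "aggregate demand:  ${demand_mbs} MB/s"
    echo "over-subscription: $(( demand_mbs * 100 / uplink_mbs ))%"

With numbers like these the drives can ask for more than the uplink can
carry, which is exactly the wall described above; with HDDs a modest
2-3x over-subscription rarely hits it in practice.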
For example, I have 32 SATA disks (a mix of 1 and 2 TB) in 2 separate
external enclosures. The enclosures are daisy-chained to a single
4-lane 3Gbit port, so I have a theoretical max of 12Gbit to use. What I
do to test is simply use dd to write zeros simultaneously to each one
of the drives. dd is able to write at the full speed of the drives
until I get enough of them going that the total throughput hits around
900MB/s. So there is some overhead from the switching and whatnot, but
it is not really bad in practice. I would just go in not expecting to
be able to exceed 80-85% of your upstream link speed.
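
For reference, a minimal sketch of that kind of test is below. The
device names are placeholders; writing zeros to raw disks destroys
their contents, so this is for scratch drives only:

    #!/bin/bash
    # Parallel sequential-write test: one dd per drive, all at once.
    # WARNING: this overwrites the target devices. Scratch disks only.
    # Placeholder device list; substitute your own JBOD members.
    devices="/dev/sdb /dev/sdc /dev/sdd /dev/sde"

    for dev in $devices; do
        # oflag=direct bypasses the page cache so the disks, not RAM,
        # are measured; bs=1M keeps the writes large and sequential.
        # count=4096 bounds each run to 4 GiB.
        dd if=/dev/zero of="$dev" bs=1M count=4096 oflag=direct &
    done
    wait   # each dd prints its own MB/s summary when it finishes

While the writes run, the aggregate throughput can be watched with
something like iostat from the sysstat package.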
The above example uses a PCIe card, but I have done the same thing
using a built-in controller. What support may be alluding to is that
the SFF-8087 to SFF-8088 slot adapters are normally built with the
SFF-8088 as a host port and the SFF-8087s as targets, meaning they are
meant to be used in a JBOD. They do sell the reverse, which is what you
are looking for, but those are generally more expensive and harder to find.
There is no reason you can't do what you're talking about as long as
you buy the proper hardware. I would suggest, though, that if you are
looking at using an external slot adapter, it may just be cheaper and
easier to buy a SAS2008-based PCIe card with external ports. Or, if
your JBOD does not have a 6G expander, grab a 3G card for $30-40.
The reason for my post is also to understand why this is/is not possible.
Thank you,
Adrian
Bottom line: the Supermicro application engineer knows what he's talking
about.
On Sun, Aug 30, 2015 at 1:13 AM, Adrian Sevcenco <Adrian.Sevcenco@xxxxxxx>
wrote:
Hi guys! Unfortunately there is no off-topic list, but the subject
is somewhat related to CentOS, as the OS is/will be CentOS :)
So, under this thin cover, I ask:
Is it possible for a SAS controller like the LSI 3008, whose specs
say "This high-performance I/O controller supports T-10
data protection model and optical support, PCIe hot plugging,
and up to 1,000 connected devices", to be limited to only 8 (or 16)
devices in a vendor implementation (integrated on a motherboard)?
The technical support from the OEM told me that "the onboard
SAS controller maximum amount of supported harddrives is 16pcs"
and "if you are planning on using more than 16 drives then you
have to use a PCI-E card based SAS controller with external ports
(which Supermicro does not sell)"
Both statements sound insane to me!
First, the specs for the 3008 say something else, and I don't know
how one could artificially reduce the number of supported HDDs
(other than in the firmware, but why would one do that?).
Second, that statement is just hogwash, as the external/internal status
of the ports has nothing to do with SAS cascading or the number of
devices supported!! (And of course it is really cheap to convert an
internal port to an external port with a bracket.)
So I ask you guys, who have more knowledge and expertise: was the
Senior Application Engineer who answered me totally incompetent?
Thank you!
Adrian
_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
https://lists.centos.org/mailman/listinfo/centos