Hello again Brendan,
We have recently deployed the following environment for a customer:
* Dell|EMC AX100 storage through Fibre Channel connections;
* 2 Dell PE-2800 servers with 1 QLogic HBA each, direct attached to the
storage;
* Around 400 GB of raw disk space on the AX100's SATA disks;
* Red Hat Enterprise Linux v3 Update 7 (this was before the release of U8);
* Red Hat Cluster Suite HA solution installed.
This environment has not yet been proven in production, but when it
goes live it will support about 100 concurrent users accessing DataFlex
files and applications, both through Telnet sessions and through Samba
file shares (Visual DataFlex running on Windows machines).
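Just to illustrate the Samba side, a share for this kind of setup can
be as simple as the fragment below (the share name and path are made
up, not the customer's actual configuration; disabling oplocks matters
when many clients update the same DataFlex files):

    [dataflex]
        comment = Shared DataFlex data files
        path = /data/dataflex
        read only = no
        ; oplocks let clients cache files locally, which is unsafe
        ; when several users update the same data files at once
        oplocks = no
        level2 oplocks = no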
I believe the performance will be quite acceptable, even though the
storage uses SATA disks.
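If you want a rough idea of how the array behaves before going live,
even a crude sequential test with dd already tells you something (the
mount point below is just an example; use a test file larger than the
storage processor's cache so you measure the disks and not the cache):

    dd if=/dev/zero of=/mnt/ax100/testfile bs=1M count=2048
    dd if=/mnt/ax100/testfile of=/dev/null bs=1M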
I've seen storage solutions deliver very good performance even with
plain ATA HDDs. One such example is Apple's Xserve RAID, which uses ATA
(not SATA or SCSI) disks. It is even certified for Red Hat Enterprise
Linux (and for RHCS too, I believe), although it uses copper Fibre
Channel connections.
In my opinion, the secret with these cheaper solutions lies in the
storage processors: if they hold a reasonable amount of cache memory,
they can compensate for occasional latency on the hard drives.
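For example (with made-up numbers): if 90% of requests are served from
cache at 0.5 ms and only 10% have to touch a SATA disk at 8 ms, the
average service time is 0.9 x 0.5 + 0.1 x 8 = 1.25 ms, far closer to
the cache latency than to the disk latency.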
We have also had good success with Dell|EMC's CX300 and CX500 arrays,
again with Fibre Channel connections.
Finally, I think iSCSI is a good option if you don't need
high-performance I/O. I previously suggested iSCSI to you in place of
Dell's PV220S (which disables write cache in cluster mode) because an
iSCSI solution with full cache functionality should give you similar or
better performance than the PV220S.
That said, we haven't seen very high I/O rates with iSCSI. In one case,
a customer used an IBM storage array (I don't remember the model)
accessed via iSCSI through a Cisco iSCSI-to-Fibre Channel switch. Even
with trunking enabled on the servers' iSCSI NICs, performance was not
as good as the same storage would deliver with the servers direct
attached or connected through a SAN. In that case we used standard NICs
and Linux's traditional iscsi-initiator-utils (a SourceForge project),
so we might have seen better performance with dedicated iSCSI HBAs
(such as the QLogic models).
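For what it's worth, the software initiator setup itself is trivial.
From memory, it was roughly this on RHEL 3 (the target address is made
up):

    # /etc/iscsi.conf
    DiscoveryAddress=192.168.10.50

    # start the initiator; the target's LUNs then appear as
    # normal /dev/sd* disks
    service iscsi start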
Some people may disagree with me about iSCSI's "not so good"
performance, but we (our company) see iSCSI as a good way to lower
costs when you have a reasonable number of servers and don't want to
invest in a fully redundant SAN (a QLogic HBA costs around USD 2,000.00
here in Brazil, for example).
I hope this gives you some extra information about storage choices for
your cluster deployments.
Best regards,
Celso.
Brendan Heading wrote:
Celso K. Webber wrote:
Maybe a Dell|EMC AX-100 using iSCSI could be a better choice, with a
not-so-high price tag.
Sorry for the long message; I believe this information can be useful
to others.
Celso,
Please don't apologize; your reply was very informative, and you've
probably just saved me a big pile of cash :) I've never been happy with
the PERC RAID controller that I currently have, and as part of moving
to the cluster I intend to use Adaptec stuff all the way. However, it
sounds like that won't fly with the PowerVault in any case.
I saw the EMC AX-100, but I've got one significant problem with it: the
disk storage inside is SATA. The server I'm dealing with is essentially
a build server used for regular parallel builds, and it may be in use
by up to 20-30 users at any one time. I've been looking at iSCSI, but I
find it very strange that nobody seems to sell iSCSI-to-SCSI boxes.
Do you think a SATA-based array is likely to hold up under these
conditions?
A less important consideration is that I've got a bunch of 15,000 RPM
150 GB & 300 GB SCSI320 disks connected to the Dell boxen at the
moment. It would be nice to reuse them, but I don't have to.
Thanks again,
Brendan
--
*Celso Kopp Webber*
celso@xxxxxxxxxxxxxxxx <mailto:celso@xxxxxxxxxxxxxxxx>
*Webbertek - Opensource Knowledge*
(41) 8813-1919
(41) 3284-3035
--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster