Re: Low cost storage for clusters

Celso K. Webber wrote:
> Hello again Brendan,

Celso,

Thanks again for the informative reply.

<snip>

> I believe the performance will be quite acceptable, even though the storage uses SATA disks.

That's the key point. At the moment my users are used to RAID5 on a PERC4 controller, which, let's face it, is not exactly stellar. If I buy a few extra drives in order to do RAID 10, and use dedicated iSCSI HBAs, that should do nicely.

Timothy Lin followed up:

> I'd think twice about getting an AX-100;
> it's an entry-level, SATA-1-only EMC RAID (no NCQ support).
> I got one for a non-clustering environment, and the speed I get out of it
> isn't much faster than a decent single SCSI HDD.

I suspect I'd be looking to use high-end SATA disks (e.g. WD Raptors) with NCQ enabled, so this is a bit of a bummer.

Has anyone got any experience with the EMC AX-150? It is the current machine Dell are offering. Since it is SATA II, I guess it should do NCQ?

I know that with a SCSI-based array, the number of cluster servers I can connect to the array is limited by the number of SCSI ports on the array enclosure. With iSCSI, is it simply a matter of connecting lots of servers using a regular GigE Ethernet switch?

On the subject of Ethernet switches, are they all made equal? Obviously I know that some are managed, but what are you getting when you pay large amounts of money for fairly ordinary-looking switches?

> In my opinion, the secret with these cheaper solutions lies in the storage processors. If they hold a reasonable amount of cache memory, they can compensate for occasional latency from the hard drives.

The AX-150 has 1GB cache. That sounds OK :)

> Finally, I think iSCSI is a good thing if you don't need high-performance I/O. I previously suggested iSCSI to you in place of Dell's PV220S (which lacks write cache in cluster mode) because an iSCSI solution with full cache functionality would give you similar or better performance than the PV220S solution.

I'm happy enough that iSCSI is acceptable; I don't think I will be able to justify a full fibre-channel SAN. Certainly I'd expect it to do about as well as straight SCSI320 in practice, even though the SCSI bus has considerably more raw bandwidth than a single GigE link (and that's before accounting for the TCP/IP overhead).
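(For reference, my own back-of-the-envelope arithmetic: a single GigE link carries 1 Gbit/s, which is roughly 125 MB/s, while Ultra320 SCSI is rated at 320 MB/s, about 2.6 Gbit/s, so the SCSI bus has roughly two and a half times the raw bandwidth of one GigE link before any TCP/IP or iSCSI framing overhead.)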

> With iSCSI, we haven't seen very high I/O rates. In one case, a customer used an IBM storage array (I don't remember the model) which was accessed via iSCSI through a Cisco iSCSI-to-Fibre-Channel switch. Even with trunking enabled on the iSCSI NICs on the servers, performance was not as good as the storage itself would have delivered if the servers were directly attached or connected through a SAN. In that case we used standard NICs and Linux's traditional iscsi-initiator-utils (a SourceForge project), so we might have seen better performance with dedicated iSCSI NICs (such as the QLogic models).

I would probably plan on getting iSCSI HBAs at least for the critical machines in the cluster. I'm planning to allow a few lower-priority machines access to the GFS cluster filesystem too, but those machines will probably just use a regular NIC.
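For those lower-priority nodes I'm picturing something like the short sketch below, i.e. just the software initiator over the plain NIC. To be clear, this is only my own illustration: it assumes the open-iscsi style tools (iscsiadm) rather than the older SourceForge initiator, and the portal address and target IQN are made up.

#!/usr/bin/env python
# Minimal sketch: bring an iSCSI session up on a node that only has a
# plain GigE NIC and the software initiator. Assumes the open-iscsi
# tools (iscsiadm); the portal address and target IQN are examples only.

import subprocess
import sys

PORTAL = "192.168.10.20:3260"               # hypothetical AX-150 iSCSI portal
TARGET = "iqn.1992-04.com.emc:ax.example"   # hypothetical target IQN


def run(cmd):
    """Echo a command, run it, and bail out if it fails."""
    print("+ " + " ".join(cmd))
    rc = subprocess.call(cmd)
    if rc != 0:
        sys.exit("command failed with exit code %d" % rc)


# Ask the portal which targets it offers (SendTargets discovery).
run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL])

# Log in to the target; the LUNs then appear as ordinary /dev/sd*
# block devices for the cluster filesystem to sit on top of.
run(["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"])

As I understand it, the machines with QLogic HBAs wouldn't need any of this, since the HBA handles the iSCSI session itself and just presents the LUNs to the OS.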

Thanks again,

Brendan

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
