Sorry, I've been away from the list and am only getting to this month-old
thread now...
Brendan, I have a PV220S which I used for a GFS cluster last year with
disastrous consequences. Performance was terrible for multiple concurrent
users (one of the chief things you worry about when selecting SCSI
over SATA in the first place). In addition, the support I got from Dell,
while attentive, ended after 3 months with "we do not support using the
PV220S in an active-active Linux cluster". This was after I had reverted
out of GFS, was using linux-ha to do failover, and was getting SCSI
reservation errors which led to data loss...
I have since moved on to iSCSI, as a few people on the list suggested you
do. Instead of the Dell/EMC box most people were talking about, I went with
a Promise VTrak M300i. As far as I could see there were only a couple of
minor differences, and the Promise box was less than half the price when
fully stocked with SATA drives (because Dell totally rips you off on the
price of the drives). It supports SATA II and NCQ.
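For anyone curious what the host side looks like: with the open-iscsi
initiator (iscsiadm) it's roughly the following -- just a sketch, the portal
IP and IQN below are made up, and older distros ship the linux-iscsi
initiator which uses /etc/iscsi.conf instead:

    # discover the targets the array exports (portal IP is an example)
    iscsiadm -m discovery -t sendtargets -p 192.168.10.50
    # log in to the target it reports back (IQN here is hypothetical)
    iscsiadm -m node -T iqn.1994-12.com.promise:m300i.example -p 192.168.10.50 --login
    # the LUN then appears as a regular /dev/sd* device you can partition or LVM as usual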
So far performance has been just as good as with the PV220S in clustered
config (I am only using 10K drives in the 220, though). I have just bought
a couple of QLogic HBAs ($500 in the US instead of the $2K someone
mentioned for Brazil) but have yet to test them. I also bought a second
enclosure and am hoping to use LVM mirroring and multipathing, as soon as
they're good to go, in order to have full redundancy:
http://www.redhat.com/f/summitfiles/presentation/May31/Clustering%20and%20Storage/StorageUninterrupted.pdf
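The rough shape of what I'm hoping to end up with (device and VG names below
are just placeholders, and clustered mirroring still needs cmirror as far as
I understand) is:

    # each enclosure shows up as one multipathed device
    multipath -ll
    # put both enclosures into one volume group
    pvcreate /dev/mapper/mpath0 /dev/mapper/mpath1
    vgcreate vg_san /dev/mapper/mpath0 /dev/mapper/mpath1
    # mirror a logical volume across the two enclosures
    # (depending on the LVM2 version you may need --corelog or a separate log device)
    lvcreate -m 1 -L 200G -n lv_data vg_san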
BTW, HP does offer an iSCSI head unit that you can then daisy-chain SCSI or
SATA enclosures off of -- so if you really want SCSI disks, that would be an
option:
http://h18006.www1.hp.com/products/storageworks/msa1510i/index.html
I haven't tested it (and last I heard HP only officially supported it on
Windows), but if anyone else on the list has experience with it I'd be
curious to hear about it.
Now that I have a 10-gig switch available to me I'm also curious to try out
a 10-gig iSCSI enclosure, but I haven't seen any on the market...
-alan
------------------------------
Message: 5
Date: Tue, 15 Aug 2006 21:29:59 +0100
From: Brendan Heading <brendanheading@xxxxxxxxxxx>
Subject: Setting up a GFS cluster
To: linux-cluster@xxxxxxxxxx
Hi all,
I'm planning to build a cluster using a pair of PE1950s, running RHEL 3
(or 4) with RHCS. The plan at the moment is to use GFS. Most of our stuff is
Dell, so the obvious choice is to use a Dell PowerVault 220S as
the shared storage device.
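For context, the GFS side of the plan would look roughly like the following
(on RHEL 4 with DLM locking; cluster, filesystem, and device names here are
just placeholders):

    # create a GFS filesystem with two journals, one per node
    gfs_mkfs -p lock_dlm -t mycluster:gfs01 -j 2 /dev/vg_shared/lv_gfs
    # mount it on both nodes
    mount -t gfs /dev/vg_shared/lv_gfs /mnt/gfs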
Before I kick off with this idea I'd be interested to hear whether anyone
has had issues with this kind of setup, or whether there are any general
performance problems. Are there other SCSI enclosures which might be
better or more appropriate for these purposes?
Regards
Brendan
------------------------------
Message: 7
Date: Tue, 15 Aug 2006 23:23:40 -0300
From: "Celso K. Webber" <celso@xxxxxxxxxxxxxxxx>
Subject: Re: Setting up a GFS cluster
To: linux clustering <linux-cluster@xxxxxxxxxx>
Hello Brendan,
Although Dell hardware is an excellent choice for Linux, the PV220S
solution performs terribly in a cluster environment.
The reason is that the PV220S does not manage RAID itself; it is in fact
a JBOD (Just a Bunch Of Disks). RAID management is done by the SCSI
controllers inside the servers (PERC 3/DC or PERC 4/DC).
Since one of the machines can go down with data still sitting in its
controller's write cache, this solution automatically disables the write
cache (write-through mode) when you set the controllers to "cluster mode".
The end result is very poor performance, especially on write operations.
It's not uncommon for Dell to supply the PV220S with 15K RPM disks to
compensate for the performance penalty caused by the lack of write cache.
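If you want to see the impact for yourself, a quick direct I/O write test on
the shared logical drive makes it obvious (just a sketch; adjust the path and
size, and be careful not to overwrite real data):

    # sequential write bypassing the page cache, so the controller cache policy dominates
    dd if=/dev/zero of=/mnt/shared/ddtest bs=1M count=1024 oflag=direct
    # compare the MB/s reported with write-back vs. write-through (cluster mode)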
As far as I can tell, Red Hat did support the PV220S in the past, during
the RHEL 2.1 era, but it is no longer certified as shared storage for
cluster solutions (RHCS or RHGFS).
If you still plan to go ahead, be warned that the PV220S performs better in
Cluster Mode if you set the data transfer rate to 160 MB/s instead of
320 MB/s (the PERC 3/DC supports transfer rates of up to 160 MB/s, while
the PERC 4/DC supports up to 320 MB/s). This is a known issue in Dell's
support queues.
As additional information, there have been many reliability problems with
the PV220S when used in Cluster Mode, as can be seen from the large number
of firmware updates for the PERC 3/DC and 4/DC (LSI Logic based chipset,
megaraid driver on Linux). More recent firmware versions seem to have
corrected most of the logical drive corruption problems I've experienced,
so I believe the PV220S is still worth a try if you can live with the poor
write performance.
Maybe a Dell|EMC AX100 using iSCSI would be a better choice, at a not so
high price tag.
Sorry for the long message; I believe this information can be useful to
others.
Best regards,
Celso.
--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster