Tristram,

I'm mounting iSCSI targets on my cluster nodes. The iSCSI target
machines are not cluster nodes, but they do exist on the same private
subnet as the cluster nodes. I then put the iSCSI block devices into a
cluster-aware volume group and create logical volumes inside that
volume group. I then format my volumes with GFS and mount them on the
appropriate nodes. Some volumes are mounted on all nodes, some are not.
I'm doing it this way because I find it *much* easier to manage my
volumes from the cluster nodes than from the independent storage
devices. I realize clvmd isn't for everyone, though.
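For illustration, the target side of a setup like this is just a few
lines of /etc/ietd.conf per export (iSCSI Enterprise Target syntax; the
IQN, LV path and alias here are made-up placeholders, not my actual
config):

    Target iqn.2006-01.com.example:storage1.array0
        # export one logical volume on the storage box as LUN 0
        Lun 0 Path=/dev/vg0/array0,Type=fileio
        # friendly name for the target (this is the alias I mention
        # further down that never shows up on my initiators)
        Alias array0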
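On the node side, the whole chain from iSCSI login to a mounted GFS
filesystem is only a handful of commands. Here's a rough sketch with
the RHEL4-era tools (sfnet initiator, CLVM, GFS); the address, device
names, sizes and cluster/filesystem names below are placeholders:

    # point the initiator at the storage box and log in; the exported
    # LUN shows up as a local SCSI disk, e.g. /dev/sdb
    echo "DiscoveryAddress=192.168.1.10" >> /etc/iscsi.conf
    service iscsi restart

    # clvmd must be running on all nodes before touching clustered LVM
    pvcreate /dev/sdb
    vgcreate -c y sharedvg /dev/sdb   # -c y marks the VG cluster-aware
    lvcreate -L 100G -n data sharedvg

    # one journal (-j) per node that will mount this filesystem
    gfs_mkfs -p lock_dlm -t mycluster:data -j 4 /dev/sharedvg/data
    mount -t gfs /dev/sharedvg/data /mnt/data

Mounting the same volume on another node is just the same mount command
run over there, as long as the filesystem was created with enough
journals.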
Either way, my experience with the major Linux software iSCSI drivers
is pretty good. I assume you'd get much better performance with
high-end disk if you used an iSCSI HBA instead of loading your server
CPU, but it works for me at near-local speeds of my 3ware SATA RAID5
disk servers.

The nice thing about this whole setup is that I *can* switch to FC or
GNBD or InfiniBand (I think, don't quote me) later if I want, as
anything I can mount on the cluster nodes as a block device can be
utilized as shared storage. For us, this was a "selling" point for
RHCS/GFS. We had disk we wanted to include in a SAN environment with
the ability to add any kind of backend storage we want. RHCS/GFS
delivered.

--
Ryan Thomson

> Thanks for the reply,
>
> So I take it that you are exporting LVM volume groups? I'm trying to
> avoid placing many services on the cluster, so avoiding CLVM is a big
> one for me. I was planning on exporting the logical volumes, about 28
> or so in total, via iSCSI.
>
> The reason for avoiding Cluster Suite for any more than GFS is that
> with Xen (3 real servers, 9 virtual ones) the cluster can stop
> functioning correctly if all the virtual servers go down but the
> physical ones remain working fine. I'm yet to find a way to stop the
> virtual servers bringing down the whole cluster - this may be
> possible, but I'm rather new to RHCS :)
>
> Cheers
>
> Tristram
>
> Ryan Thomson wrote:
>
>>Hi Tristram,
>>
>>I'm just about to put a completely Linux-based software iSCSI Red Hat
>>Cluster with GFS into production. We have four RHEL4AS machines acting
>>as cluster nodes, and an 8TB RHEL4AS server is exporting disk as two
>>arrays/iSCSI targets to the cluster nodes. Several more storage boxes
>>we already own (previously Linux servers exporting large arrays over
>>NFS) are to be added as iSCSI targets.
>>
>>I am using the iSCSI initiator that comes with RHEL4U2, the Cisco
>>open-source one, I believe. For targets, I'm using the iSCSI
>>Enterprise Target (http://sourceforge.net/projects/iscsitarget/). The
>>only thing I've found so far that doesn't seem to work is the iSCSI
>>alias: I can't seem to get the alias I set on the target to show up on
>>the initiator. I don't know if the problem is the target or the
>>initiator, as I haven't found anything online about this issue yet.
>>Otherwise, the currently available Linux iSCSI software seems to work
>>pretty much flawlessly for me.
>>
>>So far it's been quite easy and painless setting up CLVM volumes and
>>putting GFS on them. I even wrote a basic wrapper script to do all the
>>work for me, streamlining the procedure. Filesystem expansion seems to
>>work as expected. I haven't played with snapshots.
>>
>>Initial numbers show end-to-end transfer rates (NFS clients to cluster
>>NFS server to GFS) to be better for iSCSI than GNBD. Keep in mind
>>these are initial tests using bonnie++ and using 'time' to time file
>>copies of various sizes, nothing concrete. I suspected NFS to be the
>>bottleneck, but it seems the storage interconnect/fabric protocol
>>still makes a difference even with NFS being crappy to the clients.
>>
>>From cluster nodes to storage, I found transfer rates to be near local
>>max with iSCSI - again, don't trust me though, do your own tests. My
>>hardware doesn't have very high-end disk, just SATA with 3ware 9500
>>cards. I didn't do the cluster node to storage test with GNBD :(
>>
>>Anyway, so far my initial experience has been great. I solved an issue
>>causing my cluster nodes to kernel panic, and ever since it's been
>>running very well serving Apache, MySQL, OpenLDAP and NFS exports. I
>>haven't fully stress tested it yet, as I don't have a workable means
>>to do so right now besides migrating users over slowly.
>>
>>I have zero experience with Xen so I can't help you there.
>>
>>I hope that helps.
>>
>>--
>>Ryan Thomson
>>Systems Administrator
>>University Of Calgary Biocomputing
>>http://moose.bio.ucalgary.ca/
>>
>>
>>>Hi all,
>>>
>>>Is anyone in a production setting using software iSCSI targets and
>>>initiators as an alternative to GNBD? I'm exploring all our options
>>>for a Xen/Cluster Suite n+2 server setup for our ISP and would like
>>>to hear people's thoughts on the best option. Rather than using a SAN
>>>with FC, we have decided to go with an Intel RAID array running
>>>standard Linux to reduce initial costs, and need to find out what
>>>people are using in production to export block devices.
>>>
>>>Thanks in advance
>>>
>>>Tristram

--
Ryan

--
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster