Re: GFS, iSCSI, multipaths and RAID


 



On Wed, May 21, 2008 at 7:12 PM, Michael O'Sullivan
<michael.osullivan@xxxxxxxxxxxxxx> wrote:
> Hi Alex,
>
> We wanted an iSCSI SAN with highly available data, hence the need for 2
> (or more) storage devices and a reliable storage network (omitted from the
> diagram). Many of the articles I have read about iSCSI don't address
> multipathing to the iSCSI devices. In our configuration, iSCSI Disk 1 is
> presented as /dev/sdc and /dev/sdd on each server (and iSCSI Disk 2 as
> /dev/sde and /dev/sdf), but it wasn't clear how to let the servers know
> that the two iSCSI portals are attached to the same target, so I used
> mdadm. I also wanted to RAID the iSCSI disks to make sure the data stays
> highly available, hence the second use of mdadm. We then had a single
> iSCSI RAID array spread over 2 (or more) devices, which provides the iSCSI
> SAN. However, I wanted to make sure the servers did not try to access the
> same data simultaneously, so I used GFS to ensure correct use of the iSCSI
> SAN. If I understand correctly, it seems the multipathing and RAID may be
> possible in Red Hat Cluster Suite GFS without using iSCSI? Or could iSCSI
> be used with some other software to ensure proper locking for the iSCSI
> RAID array? I am reading the link you suggested to see what other people
> have done, but as always any suggestions are more than welcome.
>
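For reference, the two-layer mdadm setup Michael describes might look roughly
like the sketch below. The device names (/dev/sdc through /dev/sdf) come from
his message; the md device numbers and options are assumptions, and note that
most distributions now recommend dm-multipath over mdadm's legacy multipath
personality for this job:

```shell
# Layer 1 (sketch): merge the two portal paths to each iSCSI target
# into a single multipath md device, so the servers treat the two
# block devices as one disk.
mdadm --create /dev/md0 --level=multipath --raid-devices=2 /dev/sdc /dev/sdd
mdadm --create /dev/md1 --level=multipath --raid-devices=2 /dev/sde /dev/sdf

# Layer 2 (sketch): mirror the two multipath devices (RAID-1) so the
# data survives the loss of an entire storage device.
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/md0 /dev/md1
```

A GFS filesystem would then be created on /dev/md2 so both servers can mount
it concurrently with cluster-wide locking.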

I would not use multipath I/O with iSCSI unless you have specific
reasons for doing so. iSCSI is only as highly available as your network
infrastructure allows it to be. If you have full failover within the
network then you don't need multipath, which simplifies configuration a
lot. Provided your network core is fully redundant (at both the link and
routing layers), you can connect 2 NICs on each server to separate
switches and bond them (google for "channel bonding"). Once you have a
redundant network connection you can use the setup from the article I
posted earlier. This will give you iSCSI endpoint failover.
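For illustration, on a Red Hat-style system of that era the channel bonding
Alex mentions is typically configured along these lines. This is a sketch
only: the interface names, bond mode, and addresses are assumptions
(mode=active-backup is the usual choice when the two NICs go to separate
switches, since it needs no switch-side support):

```shell
# /etc/modprobe.conf -- load the bonding driver for bond0
# (miimon=100 polls link state every 100 ms for failover)
alias bond0 bonding
options bond0 mode=active-backup miimon=100

# /etc/sysconfig/network-scripts/ifcfg-bond0 -- illustrative address
DEVICE=bond0
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth0 -- enslave the NIC to
# bond0 (repeat for eth1, the NIC on the second switch)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```

With both NICs enslaved, either switch can fail without the iSCSI session
dropping, which is the failover the article relies on.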

-Alex

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
