Re: General FC Question

Hi,

I would again recommend using multipath-tools, which uses
device-mapper. Here is an example.

# cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
 Vendor: HP       Model: HSV100           Rev: 3025
 Type:   RAID                             ANSI SCSI revision: 02
Host: scsi0 Channel: 00 Id: 00 Lun: 01
 Vendor: HP       Model: HSV100           Rev: 3025
 Type:   Direct-Access                    ANSI SCSI revision: 02
Host: scsi0 Channel: 00 Id: 00 Lun: 02
 Vendor: HP       Model: HSV100           Rev: 3025
 Type:   Direct-Access                    ANSI SCSI revision: 02
Host: scsi0 Channel: 00 Id: 01 Lun: 00
 Vendor: HP       Model: HSV100           Rev: 3025
 Type:   RAID                             ANSI SCSI revision: 02
Host: scsi0 Channel: 00 Id: 01 Lun: 01
 Vendor: HP       Model: HSV100           Rev: 3025
 Type:   Direct-Access                    ANSI SCSI revision: 02
Host: scsi0 Channel: 00 Id: 01 Lun: 02
 Vendor: HP       Model: HSV100           Rev: 3025
 Type:   Direct-Access                    ANSI SCSI revision: 02
Host: scsi1 Channel: 00 Id: 00 Lun: 00
 Vendor: HP       Model: HSV100           Rev: 3025
 Type:   RAID                             ANSI SCSI revision: 02
Host: scsi1 Channel: 00 Id: 00 Lun: 01
 Vendor: HP       Model: HSV100           Rev: 3025
 Type:   Direct-Access                    ANSI SCSI revision: 02
Host: scsi1 Channel: 00 Id: 00 Lun: 02
 Vendor: HP       Model: HSV100           Rev: 3025
 Type:   Direct-Access                    ANSI SCSI revision: 02
Host: scsi1 Channel: 00 Id: 01 Lun: 00
 Vendor: HP       Model: HSV100           Rev: 3025
 Type:   RAID                             ANSI SCSI revision: 02
Host: scsi1 Channel: 00 Id: 01 Lun: 01
 Vendor: HP       Model: HSV100           Rev: 3025
 Type:   Direct-Access                    ANSI SCSI revision: 02
Host: scsi1 Channel: 00 Id: 01 Lun: 02
 Vendor: HP       Model: HSV100           Rev: 3025
 Type:   Direct-Access                    ANSI SCSI revision: 02

That output won't help you identify each disk.

Here is the multipath-tools output for those disks:
# multipath -v3
..... truncated output .......
3600508b4000116370000a00000c00000 0:0:0:1 sda 8:0 [ready]
3600508b40001168a0000e00000090000 0:0:0:2 sdb 8:16 [ready]
3600508b4000116370000a00000c00000 0:0:1:1 sdc 8:32 [faulty]
3600508b40001168a0000e00000090000 0:0:1:2 sdd 8:48 [faulty]
3600508b4000116370000a00000c00000 1:0:0:1 sde 8:64 [ready]
3600508b40001168a0000e00000090000 1:0:0:2 sdf 8:80 [ready]
3600508b4000116370000a00000c00000 1:0:1:1 sdg 8:96 [faulty]
3600508b40001168a0000e00000090000 1:0:1:2 sdh 8:112 [faulty]
..... truncated output .......

It finds each disk's WWN and groups the paths to the same disk.
# multipath -l
storage.old ()
[size=100 GB][features="0"][hwhandler="0"]
\_ round-robin 0 [active]
 \_ 0:0:0:1 sda 8:0   [active]
 \_ 0:0:1:1 sdc 8:32  [active]
 \_ 1:0:0:1 sde 8:64  [active]
 \_ 1:0:1:1 sdg 8:96  [failed]

storage ()
[size=100 GB][features="0"][hwhandler="0"]
\_ round-robin 0 [active]
 \_ 0:0:0:2 sdb 8:16  [active]
 \_ 0:0:1:2 sdd 8:48  [failed]
 \_ 1:0:0:2 sdf 8:80  [active]
 \_ 1:0:1:2 sdh 8:112 [failed]

WWNs are aliased to names in the /etc/multipath.conf file, for example:
----------------------
multipath {
        wwid    3600508b4000116370000a00000c00000
        alias   storage.old
}
multipath {
        wwid    3600508b40001168a0000e00000090000
        alias   storage
}
----------------------

So it will create /dev/mapper/storage and /dev/mapper/storage.old
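
If you edit multipath.conf after the maps already exist, you can flush
and rebuild them so the new aliases take effect (a sketch; it assumes
nothing is mounted on those maps):

# multipath -F
# multipath -v2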

# dmsetup ls
storage.old     (253, 4)
storage3        (253, 3)
storage2        (253, 2)
storage1        (253, 1)
storage (253, 0)
storage.old2    (253, 6)
storage.old1    (253, 5)

My devices are partitioned, so mapper devices for each partition are
created automatically:

/dev/mapper/storage (whole disk)
/dev/mapper/storage1 (first partition)
/dev/mapper/storage2 (second partition)
/dev/mapper/storage3 (third partition)
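
If the partition maps do not show up on their own, kpartx (shipped with
multipath-tools) can create them from the multipath map; a minimal sketch:

# kpartx -a /dev/mapper/storage
# dmsetup ls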

This way you can tell GFS to use the mapper devices, which don't depend
on the order in which the disks are found, only on their WWNs.
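
For example, making and mounting a GFS filesystem on one of the aliased
partitions could look like this (just a sketch; the cluster name
"mycluster", the lock table name and the journal count are made up for
illustration):

# gfs_mkfs -p lock_dlm -t mycluster:storage1 -j 3 /dev/mapper/storage1
# mount -t gfs /dev/mapper/storage1 /mnt/storage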

As for the device-mapper question, you can remove all device-mapper devices with
# dmsetup remove_all

Or remove just one device
# dmsetup remove storage
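
multipath has an equivalent for its own maps: -f flushes a single map
and -F flushes all unused maps, for example

# multipath -f storage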


I hope this helps you.
Regards,
Jaime.


2006/9/26, isplist@xxxxxxxxxxxx <isplist@xxxxxxxxxxxx>:
PS: Is my problem hard loop IDs or LUNs? Could I achieve what I need either
way, or is it one or the other?


On Tue, 26 Sep 2006 07:44:00 -0400, Kovacs, Corey J. wrote:
> One more thing: when using more than one path (basically any SAN setup), the
> device mappings will wrap around for every path. So for two paths (single HBA,
> dual controller), three disks will look like this...
>
> disk1=/dev/sda
> disk2=/dev/sdb
> disk3=/dev/sdc
> disk1=/dev/sdd
> disk2=/dev/sde
> disk3=/dev/sdf
>
> and four like this..
>
> disk1=/dev/sda
> disk2=/dev/sdb
> disk3=/dev/sdc
> disk4=/dev/sdd
> disk1=/dev/sde
> disk2=/dev/sdf
> disk3=/dev/sdg
> disk4=/dev/sdh
>
>
> Or for dual HBA, dual controller (4 paths):
>
>
> disk1=/dev/sda
> disk2=/dev/sdb
> disk3=/dev/sdc
> disk4=/dev/sdd
> disk1=/dev/sde
> disk2=/dev/sdf
> disk3=/dev/sdg
> disk4=/dev/sdh
> disk1=/dev/sdi
> disk2=/dev/sdj
> disk3=/dev/sdk
> disk4=/dev/sdl
> disk1=/dev/sdm
> disk2=/dev/sdn
> disk3=/dev/sdo
> disk4=/dev/sdp
>
> etc...
>
> Cheers
>
> With the Qlogic drivers in failover mode, you'll get this..
>
> disk1=/dev/sda
> disk2=/dev/sdb
> disk3=/dev/sdc
> disk4=/dev/sdd
>
> even though there are multiple paths
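>
> If you want to confirm that two sd devices really are the same LUN seen
> down different paths, comparing WWIDs works; a sketch using scsi_id (the
> /block/sdX sysfs form assumes a 2.6 kernel with udev's scsi_id):
>
> # /sbin/scsi_id -g -u -s /block/sda
> # /sbin/scsi_id -g -u -s /block/sde
>
> If both print the same ID, they are the same disk.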
>
>
> Corey
>
> -----Original Message-----
> From: linux-cluster-bounces@xxxxxxxxxx
> [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of Kovacs, Corey J.
> Sent: Tuesday, September 26, 2006 7:38 AM
> To: isplist@xxxxxxxxxxxx; linux clustering
> Subject: RE:  General FC Question
>
> You don't say which FC cards you are using, but if they're QLogic, the
> driver can be set to combine the devices. Basically, what's happened is that
> your machine is picking up the alternate path to the device, which is a
> perfectly valid thing to do; it's just not what you need at this point. It
> may be as simple as your secondary controller actually holding the LUN you
> are trying to access. To work around it, you might just be able to reset the
> secondary controller and force the primary to take over the LUN. This happens
> quite a bit depending on your setup. The QLogic drivers, when set up for
> failover, will coalesce the devices into a single device by the WWID of the
> LUN. If that's not an option, then try the multipath-tools support in RHEL 4.2
> or above. You won't be using the /dev/sd{a,b,c,...} devices; rather it'll be
> /dev/mpath/mpath0 etc., or whatever you set them to instead.
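>
> A quick way to check what you ended up with once multipathd is running
> (a sketch; the names depend on your multipath.conf aliases):
>
> # multipath -l
> # ls /dev/mpath/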
>
> Even without failover, the latest Qlogic drivers will make both paths active
> so that you never end up with a dead path upon boot up.
>
>
> Hope this helps.
>
>
> Corey
>
> -----Original Message-----
> From: linux-cluster-bounces@xxxxxxxxxx
> [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of isplist@xxxxxxxxxxxx
> Sent: Monday, September 25, 2006 11:18 AM
> To: linux-cluster
> Subject:  General FC Question
>
> After adding storage, my cluster comes up with different /dev/sda, /dev/sdb,
> etc. settings. My initial device now comes up as sdc when it used to be sda.
>
> Is there some way of allowing GFS to see the storage in some way that it can
> know which device is which when I add a new one or remove one, etc?
>
> Hard loop IDs on the FC side, I think, but is there anything on the GFS side?
>
> Mike
>
>


--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
