RE: Mirror between different SAN fabrics

Hi

Your solution looks nice.
But I have just found out that, unlike lvm2, mdadm is not cluster-aware: it
seems impossible to transfer RAID state information from one node to
another. As we use Red Hat Cluster Suite, we depend on a cluster-aware
solution.

Regards, Mathias

> -----Original Message-----
> From: linux-lvm-bounces@redhat.com 
> [mailto:linux-lvm-bounces@redhat.com] On Behalf Of 
> Christian.Rohrmeier@SCHERING.DE
> Sent: Thursday, 28 December 2006 09:49
[...] 
> Hi,
> 
> Here is a nice example from one of my RHEL 4 Oracle servers:
> 
> We have three layers:
> 
> First, the LUNs from the SAN are multipathed to device aliases:
> 
> [root@ ~]# multipath -ll
> sanb (XXXX60e8003f653000000XXXX000001c7)
> [size=101 GB][features="1 queue_if_no_path"][hwhandler="0"]
> \_ round-robin 0 [active]
>  \_ 0:0:1:1 sdb 8:16 [active][ready]
>  \_ 1:0:1:1 sdd 8:48 [active][ready]
> 
> sana (XXXX60e80039cbe000000XXXX000006ad)
> [size=101 GB][features="1 queue_if_no_path"][hwhandler="0"]
> \_ round-robin 0 [active]
>  \_ 0:0:0:1 sda 8:0  [active][ready]
>  \_ 1:0:0:1 sdc 8:32 [active][ready]
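> 
> Aliases like these are typically defined in /etc/multipath.conf; a minimal
> sketch, reusing the (masked) WWIDs from the output above as placeholders:
> 
> multipaths {
>     multipath {
>         wwid  XXXX60e80039cbe000000XXXX000006ad
>         alias sana
>     }
>     multipath {
>         wwid  XXXX60e8003f653000000XXXX000001c7
>         alias sanb
>     }
> }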
> 
> Next, these multipath aliases are RAIDed:
> 
> [root@ ~]# mdadm --detail /dev/md0
> /dev/md0:
>         Version : 00.90.01
>   Creation Time : Thu Nov  2 13:07:01 2006
>      Raid Level : raid1
>      Array Size : 106788160 (101.84 GiB 109.35 GB)
>     Device Size : 106788160 (101.84 GiB 109.35 GB)
>    Raid Devices : 2
>   Total Devices : 2
> Preferred Minor : 0
>     Persistence : Superblock is persistent
> 
>     Update Time : Thu Dec 28 09:36:19 2006
>           State : clean
>  Active Devices : 2
> Working Devices : 2
>  Failed Devices : 0
>   Spare Devices : 0
> 
> 
>            UUID : b5ac4ae9:99da8114:744a7ebb:aba6f687
>          Events : 0.4254576
> 
>     Number   Major   Minor   RaidDevice State
>        0     253        2        0      active sync   /dev/mapper/sana
>        1     253        3        1      active sync   /dev/mapper/sanb
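> 
> An array like this would have been created with something along these
> lines (a sketch only; the original create command is not part of this
> message):
> 
> mdadm --create /dev/md0 --level=raid1 --raid-devices=2 \
>       /dev/mapper/sana /dev/mapper/sanb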
> 
> And finally, the RAID device is used with LVM:
> 
> [root@ ~]# vgs -o +devices
>   VG   #PV #LV #SN Attr   VSize   VFree Devices
>   vg00   2   2   0 wz--n-  31.78G    0  /dev/cciss/c0d0p2(0)
>   vg00   2   2   0 wz--n-  31.78G    0  /dev/cciss/c0d0p4(0)
>   vg00   2   2   0 wz--n-  31.78G    0  /dev/cciss/c0d0p2(250)
>   vg01   1   5   0 wz--n- 101.84G    0  /dev/md0(0)
>   vg01   1   5   0 wz--n- 101.84G    0  /dev/md0(5120)
>   vg01   1   5   0 wz--n- 101.84G    0  /dev/md0(5376)
>   vg01   1   5   0 wz--n- 101.84G    0  /dev/md0(5632)
>   vg01   1   5   0 wz--n- 101.84G    0  /dev/md0(8192)
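> 
> The LVM layer on top of /dev/md0 is then the standard sequence; a sketch
> (the logical-volume name and size are only illustrative, and the output
> above shows five LVs in vg01 rather than one):
> 
> pvcreate /dev/md0
> vgcreate vg01 /dev/md0
> lvcreate -n lvdata -L 20G vg01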
> 
> This works very well: individual paths or entire mirror halves can break
> away without any disruption in disk access.
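> 
> One way to exercise the mirror half of this (a sketch; do it only on a
> test system) is to fail one leg by hand, then re-add it and watch the
> resync:
> 
> mdadm /dev/md0 --fail /dev/mapper/sanb
> mdadm /dev/md0 --remove /dev/mapper/sanb
> mdadm /dev/md0 --add /dev/mapper/sanb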
> 
> Cheers,
> 
> Christian


