Re: Software raid on top of lvm logical volume

Theo Van Dinter wrote:

Well, it's because we have problems doing it this way. We have a server connected to 2 EMC Symmetrix arrays, from which we assign some 70GB and 40GB LUNs. We use PowerPath to manage the dual paths to the LUNs, so I first created the mirrors like this:

raiddev /dev/md0
       raid-level              1
       nr-raid-disks           2
       nr-spare-disks          0
       chunk-size              32
       persistent-superblock   1
       device                  /dev/emcpowera1
       raid-disk               0
       device                  /dev/emcpowerf1
       raid-disk               1
#      failed-disk             1


raiddev /dev/md1
       raid-level              1
       nr-raid-disks           2
       nr-spare-disks          0
       chunk-size              32
       persistent-superblock   1
       device                  /dev/emcpowerb1
       raid-disk               0
       device                  /dev/emcpowerg1
       raid-disk               1
#      failed-disk             1

raiddev /dev/md2
       raid-level              1
       nr-raid-disks           2
       nr-spare-disks          0
       chunk-size              32
       persistent-superblock   1
       device                  /dev/emcpowerc1
       raid-disk               0
       device                  /dev/emcpowerh1
       raid-disk               1
#      failed-disk             1


raiddev /dev/md3
       raid-level              1
       nr-raid-disks           2
       nr-spare-disks          0
       chunk-size              32
       persistent-superblock   1
       device                  /dev/emcpowerd1
       raid-disk               0
       device                  /dev/emcpoweri1
       raid-disk               1
#      failed-disk             1

...... and so on, up to raiddev /dev/md9
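
For reference, the arrays are built with raidtools from /etc/raidtab; here is a minimal sketch, with a rough mdadm equivalent for md0 added for comparison (the mdadm line is an assumption for illustration, not what we actually ran):

# raidtools: build each array from /etc/raidtab
mkraid /dev/md0
mkraid /dev/md1
# ... and so on up to /dev/md9

# rough mdadm equivalent for md0 only (assumed, for comparison)
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
        /dev/emcpowera1 /dev/emcpowerf1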

So /proc/mdstat gives:
Personalities : [raid1]
read_ahead 1024 sectors
Event: 15
md9 : active raid1 emcpowerd1[1] emcpowero1[0]
42829184 blocks [2/2] [UU]
md8 : active raid1 emcpowerc1[1] emcpowern1[0]
42829184 blocks [2/2] [UU]
md7 : active raid1 emcpowerb1[1] emcpowerm1[0]
42829184 blocks [2/2] [UU]
md6 : active raid1 emcpowera1[1] emcpowerl1[0]
42829184 blocks [2/2] [UU]
md5 : active raid1 emcpowerp1[1] emcpowerk1[0]
42829184 blocks [2/2] [UU]
md4 : active raid1 emcpowerj1[1] emcpowere1[0]
71384704 blocks [2/2] [UU]
md3 : active raid1 emcpoweri1[1] emcpowerd1[0]
71384704 blocks [2/2] [UU]
md2 : active raid1 emcpowerc1[0] emcpowerh1[1]
71384704 blocks [2/2] [UU]
md1 : active raid1 emcpowerg1[1] emcpowerb1[0]
71384704 blocks [2/2] [UU]
md0 : active raid1 emcpowerf1[1] emcpowera1[0]
71384704 blocks [2/2] [UU]
unused devices: <none>


But after a while I get this:
Personalities : [raid1]
read_ahead 1024 sectors
Event: 10
md9 : active raid1 [dev e9:31][1] [dev e8:e1][0]
42829184 blocks [2/2] [UU]
md8 : active raid1 [dev e9:21][1] [dev e8:d1][0]
42829184 blocks [2/2] [UU]
md7 : active raid1 [dev e9:11][1] [dev e8:c1][0]
42829184 blocks [2/2] [UU]
md6 : active raid1 [dev e9:01][1] [dev e8:b1][0]
42829184 blocks [2/2] [UU]
md5 : active raid1 [dev e8:f1][1] [dev e8:a1][0]
42829184 blocks [2/2] [UU]
md4 : active raid1 [dev e8:91][1] [dev e8:41][0]
71384704 blocks [2/2] [UU]
md3 : active raid1 [dev e8:81][1] [dev e8:31][0]
71384704 blocks [2/2] [UU]
md2 : active raid1 [dev e8:71][1] [dev e8:21][0]
71384704 blocks [2/2] [UU]
md1 : active raid1 [dev e8:61][1] [dev e8:11][0]
71384704 blocks [2/2] [UU]
md0 : active raid1 [dev e8:51][1] [dev e8:01][0]
71384704 blocks [2/2] [UU]
unused devices: <none>


And if we try to rebuild the mirrors after losing access to one of the EMC arrays, we get a really bad result:
Personalities : [raid1]
read_ahead 1024 sectors
Event: 26
md9 : active raid1 emcpowerd1[2] [dev e8:e1][0]
42829184 blocks [2/1] [U_]
[>....................] recovery = 1.4% (630168/42829184) finish=68.1min speed=10315K/sec
md8 : active raid1 emcpowerc1[2] [dev e8:d1][0]
42829184 blocks [2/1] [U_]
md7 : active raid1 emcpowerb1[2] [dev e8:c1][0]
42829184 blocks [2/1] [U_]
md6 : active raid1 emcpowera1[2] [dev e8:b1][0]
42829184 blocks [2/1] [U_]
md5 : active raid1 emcpowerp1[2] [dev e8:a1][0]
42829184 blocks [2/1] [U_]
md4 : active raid1 emcpowerj1[2] [dev e8:41][0]
71384704 blocks [2/1] [U_]
md3 : active raid1 emcpoweri1[2] [dev e8:31][0]
71384704 blocks [2/1] [U_]
md2 : active raid1 emcpowerh1[2] [dev e8:21][0]
71384704 blocks [2/1] [U_]
md1 : active raid1 emcpowerg1[2] [dev e8:11][0]
71384704 blocks [2/1] [U_]
md0 : active raid1 emcpowerf1[2] [dev e8:01][0]
71384704 blocks [2/1] [U_]
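
(For completeness, the rebuild above was triggered roughly like this with raidtools; a sketch, with the md0 device name taken from the raidtab above:

raidhotremove /dev/md0 /dev/emcpowerf1   # drop the stale half if still listed
raidhotadd    /dev/md0 /dev/emcpowerf1   # re-add it; md resyncs it as disk [2]

and the same for md1 through md9.)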


So maybe it would be better to create a RAID device on top of the LVM volumes.
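
For illustration, such a raidtab entry might look like this (a sketch only; the logical volume names /dev/vg0/lvmir0 and /dev/vg1/lvmir0 are hypothetical):

raiddev /dev/md10
       raid-level              1
       nr-raid-disks           2
       nr-spare-disks          0
       chunk-size              32
       persistent-superblock   1
       device                  /dev/vg0/lvmir0
       raid-disk               0
       device                  /dev/vg1/lvmir0
       raid-disk               1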


On Thu, Oct 28, 2004 at 12:02:06AM +0200, Eric Monjoin wrote:


I would like to know if it's possible (and whether it works reliably) to create a software mirror (md0) on top of 2 LVM logical volumes.



You'd usually want to make your RAID devices first, then put LVM on top of them. I can't really think of any benefits of doing it the other way around.
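
(In sketch form, that recommended order would be something like the following; the volume group name vg0 and the sizes are hypothetical:

pvcreate /dev/md0 /dev/md1
vgcreate vg0 /dev/md0 /dev/md1
lvcreate -L 10G -n lv0 vg0
mke2fs /dev/vg0/lv0
)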



------------------------------------------------------------------------

_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


