Re: Software raid on top of lvm logical volume

On Thu, Oct 28, 2004 at 08:01:27AM +0200, Eric Monjoin wrote:
Theo Van Dinter wrote:

Well, it's because we have had problems this way. We have a server connected to two EMC Symmetrix arrays, from which we are assigned some 70GB and 40GB LUNs. We use PowerPath to manage the dual paths to the LUNs, so I first created the mirror like this:


But after a while I get this:
Personalities : [raid1]
read_ahead 1024 sectors
Event: 10 md9 : active raid1 [dev e9:31][1] [dev e8:e1][0]
42829184 blocks [2/2] [UU]
this might be a problem with powerpath, it seems harmless though

and if we try to rebuild the mirror after losing access to one of
the EMCs, we get a really bad result:
Personalities : [raid1]
read_ahead 1024 sectors
Event: 26 md9 : active raid1 emcpowerd1[2] [dev e8:e1][0]
42829184 blocks [2/1] [U_]
[>....................] recovery = 1.4% (630168/42829184) finish=68.1min speed=10315K/sec
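As a sanity check, the finish estimate above follows directly from the numbers md reports: blocks still to copy divided by the resync speed. A quick sketch using the figures from the quoted recovery line:

```shell
# Recompute md's finish estimate from the recovery line quoted above:
# (total blocks - recovered blocks) / speed, in minutes.
total=42829184   # 1K blocks in md9
done_=630168     # blocks already resynced (1.4%)
speed=10315      # K/sec reported by md
awk -v t="$total" -v d="$done_" -v s="$speed" \
    'BEGIN { printf "finish ~ %.1f min\n", (t - d) / s / 60 }'
# prints "finish ~ 68.2 min", matching md's own finish=68.1min estimate
```

In other words, without a resync bitmap a single dropped path costs a full hour-plus copy of the entire 40GB mirror.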
This is a problem with Linux md RAID: it does not support fast resync using a bitmap of changed sectors. There is a project to implement that, but it is not yet in the standard kernel; look in the linux-raid mailing list archives for something called 'fast raid 1'.
I do not believe you would gain any advantage from stacking md above LVM anyway.
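Until a bitmap resync is available, it is at least cheap to notice when a mirror has dropped a half: the [UU]/[U_] status string in /proc/mdstat flags it. A minimal sketch, using the output quoted above as sample input (on a live system, read /proc/mdstat directly):

```shell
# Detect a degraded md mirror from its /proc/mdstat status string.
# Sample input taken from the output quoted earlier in this thread.
mdstat='md9 : active raid1 emcpowerd1[2] [dev e8:e1][0]
      42829184 blocks [2/1] [U_]'

# An "_" inside the [UU...] field marks a missing mirror half.
if echo "$mdstat" | grep -q '\[U*_'; then
    echo "md9 is degraded"
fi
```

Something like this in a cron job would catch the degraded state before a full resync surprises you.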
Regards,
Luca

--
Luca Berra -- bluca@comedia.it
       Communication Media & Services S.r.l.
/"\
\ /     ASCII RIBBON CAMPAIGN
 X        AGAINST HTML MAIL
/ \

_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
