Re: Raid5 race patch (fwd)

On Thu, 14 Mar 2002, Neil Brown wrote:

Hi!

Perhaps the boot sequence of the RAID-related messages would be informative:

md: raid1 personality registered as nr 3
md: raid5 personality registered as nr 4
raid5: measuring checksumming speed
   8regs     :  1730.800 MB/sec
   32regs    :  1228.400 MB/sec
   pIII_sse  :  2061.200 MB/sec
   pII_mmx   :  2245.200 MB/sec
   p5_mmx    :  2363.600 MB/sec
raid5: using function: pIII_sse (2061.200 MB/sec)
md: md driver 0.90.0 MAX_MD_DEVS=256, MD_SB_DISKS=27
md: Autodetecting RAID arrays.
 [events: 00000014]
 [events: 0000000a]
 [events: 00000006]
 [events: 00000014]
 [events: 0000000a]
 [events: 00000006]
 [events: 00000014]
 [events: 0000000a]
 [events: 00000006]
md: autorun ...
md: considering hdi3 ...
md:  adding hdi3 ...
md:  adding hdg3 ...
md:  adding hde3 ...
md: created md2
md: bind<hde3,1>
md: bind<hdg3,2>
md: bind<hdi3,3>
md: running: <hdi3><hdg3><hde3>
md: hdi3's event counter: 00000006
md: hdg3's event counter: 00000006
md: hde3's event counter: 00000006
md2: max total readahead window set to 496k
md2: 2 data-disks, max readahead per data-disk: 248k
raid5: device hdi3 operational as raid disk 2
raid5: device hdg3 operational as raid disk 1
raid5: device hde3 operational as raid disk 0
raid5: allocated 3291kB for md2
raid5: raid level 5 set md2 active with 3 out of 3 devices, algorithm 0
RAID5 conf printout:
 --- rd:3 wd:3 fd:0
 disk 0, s:0, o:1, n:0 rd:0 us:1 dev:hde3
 disk 1, s:0, o:1, n:1 rd:1 us:1 dev:hdg3
 disk 2, s:0, o:1, n:2 rd:2 us:1 dev:hdi3
RAID5 conf printout:
 --- rd:3 wd:3 fd:0
 disk 0, s:0, o:1, n:0 rd:0 us:1 dev:hde3
 disk 1, s:0, o:1, n:1 rd:1 us:1 dev:hdg3
 disk 2, s:0, o:1, n:2 rd:2 us:1 dev:hdi3
md: updating md2 RAID superblock on device
md: hdi3 [events: 00000007]<6>(write) hdi3's sb offset: 538112
md: hdg3 [events: 00000007]<6>(write) hdg3's sb offset: 538112
md: hde3 [events: 00000007]<6>(write) hde3's sb offset: 538112
md: considering hdi2 ...
md:  adding hdi2 ...
md:  adding hdg2 ...
md:  adding hde2 ...
md: created md1
md: bind<hde2,1>
md: bind<hdg2,2>
md: bind<hdi2,3>
md: running: <hdi2><hdg2><hde2>
md: hdi2's event counter: 0000000a
md: hdg2's event counter: 0000000a
md: hde2's event counter: 0000000a
md1: max total readahead window set to 496k
md1: 2 data-disks, max readahead per data-disk: 248k
raid5: device hdi2 operational as raid disk 2
raid5: device hdg2 operational as raid disk 1
raid5: device hde2 operational as raid disk 0
raid5: allocated 3291kB for md1
raid5: raid level 5 set md1 active with 3 out of 3 devices, algorithm 0
RAID5 conf printout:
 --- rd:3 wd:3 fd:0
 disk 0, s:0, o:1, n:0 rd:0 us:1 dev:hde2
 disk 1, s:0, o:1, n:1 rd:1 us:1 dev:hdg2
 disk 2, s:0, o:1, n:2 rd:2 us:1 dev:hdi2
RAID5 conf printout:
 --- rd:3 wd:3 fd:0
 disk 0, s:0, o:1, n:0 rd:0 us:1 dev:hde2
 disk 1, s:0, o:1, n:1 rd:1 us:1 dev:hdg2
 disk 2, s:0, o:1, n:2 rd:2 us:1 dev:hdi2
md: updating md1 RAID superblock on device
md: hdi2 [events: 0000000b]<6>(write) hdi2's sb offset: 59456448
md: hdg2 [events: 0000000b]<6>(write) hdg2's sb offset: 59456448
md: hde2 [events: 0000000b]<6>(write) hde2's sb offset: 59456448
md: considering hdi1 ...
md:  adding hdi1 ...
md:  adding hdg1 ...
md:  adding hde1 ...
md: created md0
md: bind<hde1,1>
md: bind<hdg1,2>
md: bind<hdi1,3>
md: running: <hdi1><hdg1><hde1>
md: hdi1's event counter: 00000014
md: hdg1's event counter: 00000014
md: hde1's event counter: 00000014
md: RAID level 1 does not need chunksize! Continuing anyway.
md0: max total readahead window set to 124k
md0: 1 data-disks, max readahead per data-disk: 124k
raid1: device hdi1 operational as mirror 2
raid1: device hdg1 operational as mirror 1
raid1: device hde1 operational as mirror 0
raid1: raid set md0 active with 3 out of 3 mirrors
md: updating md0 RAID superblock on device
md: hdi1 [events: 00000015]<6>(write) hdi1's sb offset: 56128
md: hdg1 [events: 00000015]<6>(write) hdg1's sb offset: 56128
md: hde1 [events: 00000015]<6>(write) hde1's sb offset: 56128
md: ... autorun DONE.

> Ok, I think I have it.
> Any call to MD_BUG would try to claim some semaphores, but if they
> were already claimed... things freeze up.
> 
> MD_BUG is called in a number of situations that aren't really bugs,
> like when you try to remove an active (i.e. not failed) drive from an
> array.
> 
> It could be that you were hitting a benign BUG message and this was
> deadlocking.
> 
> Please apply the following patch (which just removes the locking from
> MD_BUG) and try again.  Thanks.
> 
> NeilBrown
> 
> --- ./drivers/md/md.c	2002/03/14 00:37:07	1.2
> +++ ./drivers/md/md.c	2002/03/14 00:37:19
> @@ -872,8 +872,8 @@
>  	printk("md:	**********************************\n");
>  	printk("md:	* <COMPLETE RAID STATE PRINTOUT> *\n");
>  	printk("md:	**********************************\n");
> -	down(&all_mddevs_sem);
> -	ITERATE_MDDEV_LOCK(mddev,tmp) {
> +/*	down(&all_mddevs_sem); */
> +	ITERATE_MDDEV/*_LOCK*/(mddev,tmp) {
>  		printk("md%d: ", mdidx(mddev));
>  
>  		ITERATE_RDEV(mddev,rdev,tmp2)
> @@ -888,7 +888,7 @@
>  		ITERATE_RDEV(mddev,rdev,tmp2)
>  			print_rdev(rdev);
>  	}
> -	up(&all_mddevs_sem);
> +/*	up(&all_mddevs_sem); */
>  	printk("md:	**********************************\n");
>  	printk("\n");
>  }
> 

__________________________________________________________________
|    Matjaz Godec    |    Agenda d.o.o.    |   ISP for business  |
|   Tech. Manager    |   Gosposvetska 84   |     WAN networks    |
|   gody@slon.net    |   si-2000 Maribor   |  Internet/Intranet  |
| tel:+386.2.2340860 |      Slovenija      | Application servers |
|http://www.slon.net |http://www.agenda.si |  Caldera OpenLinux  |

-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
