Re: Raid 5 on 2.5.50/2.5.51, dirty array, kernel panic

>I set up an IDE raid array as such:
>  /dev/md1 hda1 hdc1 hde1 hdg1 RAID 1 (ext2)
>  /dev/md0 hda2 hdc2 hde2 hdg2 RAID 5 (reiserfs)
>... and physically remove a drive (hda) ...
> <0>Kernel panic: Attempted to kill init!

^^^ The above also applies if you put hda back in after it has been
marked faulty (because it was missing), even once the goto abort
(see below) has been removed.

This appears to stem from a 2.5.33/34(?) patch that included:

   if (mddev->degraded == 1 &&
       !(mddev->state & (1<<MD_SB_CLEAN))) {
      printk(KERN_ERR "raid5: cannot start dirty degraded array for md%d\n", mdidx(mddev));
      goto abort;
   }

Removing the goto abort; there fixes the problem.

I imagine this check needs to verify that only one device is bad and,
if so, let the array continue starting up; a rough sketch of that idea
is below.
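
Something along these lines, reusing the field and macro names from the
snippet above (mddev->degraded, MD_SB_CLEAN, mdidx()). The
start_dirty_degraded flag is purely a hypothetical knob for
illustration, not an existing 2.5.x parameter, and this is an untested
sketch rather than a real patch:

   if (mddev->degraded == 1 &&
       !(mddev->state & (1<<MD_SB_CLEAN))) {
      if (start_dirty_degraded) {
         /* Hypothetical override: only one device is missing, so warn
          * loudly but let assembly continue instead of aborting. */
         printk(KERN_WARNING "raid5: starting dirty degraded array for md%d - data may be inconsistent\n", mdidx(mddev));
      } else {
         /* Current behaviour: refuse to start the dirty degraded array. */
         printk(KERN_ERR "raid5: cannot start dirty degraded array for md%d\n", mdidx(mddev));
         goto abort;
      }
   }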

I would have expected an "unable to mount root" error rather than a kernel
panic, so something must be wrong (probably not very critical) elsewhere.

Can someone more familiar with linux-2.5.51/drivers/md/raid5.c and kernel
modules take a look at this please? Thanks!

-eurijk!
