On Mon, Apr 26, 2010 at 4:29 PM, Janos Haar <janos.haar@xxxxxxxxxxxx> wrote:
>
> ----- Original Message ----- From: "Michael Evans" <mjevans1983@xxxxxxxxx>
> To: "Janos Haar" <janos.haar@xxxxxxxxxxxx>
> Cc: "MRK" <mrk@xxxxxxxxxxxxx>; <linux-raid@xxxxxxxxxxxxxxx>
> Sent: Tuesday, April 27, 2010 1:06 AM
> Subject: Re: Suggestion needed for fixing RAID6
>
>> On Mon, Apr 26, 2010 at 3:39 PM, Janos Haar <janos.haar@xxxxxxxxxxxx> wrote:
>>>
>>> ----- Original Message ----- From: "MRK" <mrk@xxxxxxxxxxxxx>
>>> To: "Janos Haar" <janos.haar@xxxxxxxxxxxx>
>>> Cc: <linux-raid@xxxxxxxxxxxxxxx>
>>> Sent: Monday, April 26, 2010 6:53 PM
>>> Subject: Re: Suggestion needed for fixing RAID6
>>>
>>>> On 04/26/2010 02:52 PM, Janos Haar wrote:
>>>>>
>>>>> Oops, you are right!
>>>>> It was my mistake.
>>>>> Sorry, I will try it again, covering both drives with dm-cow.
>>>>
>>>> Great! Post the results here, the dmesg in particular.
>>>> The dmesg should contain multiple lines like "raid5:md3: read error
>>>> corrected ....."; then you know it worked.
>>>
>>> md3 : active raid6 sdd4[12] sdl4[11] sdk4[10] sdj4[9] sdi4[8] dm-1[13](F) sdg4[6] sdf4[5] dm-0[14](F) sdc4[2] sdb4[1] sda4[0]
>>>       14626538880 blocks level 6, 16k chunk, algorithm 2 [12/9] [UUU__UU_UUUU]
>>>       [>....................]  recovery =  1.5% (22903832/1462653888) finish=3188383.4min speed=7K/sec
>>>
>>> Khm.... :-D
>>> Is it working on something, or has it stopped with 3 missing drives? : ^ )
>>>
>>> (I have found the cause of the two dm devices' failure.
>>> A retry is running now.)
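[Editor's note: a full recovery rewrites every block of the rebuilt member, so a dm snapshot sitting over a member must be able to absorb roughly a member-sized amount of COW data plus exception metadata. A rough sizing sketch follows; the per-member size is taken from the status line above, while the ~1% metadata overhead is an assumed ballpark, not a measured figure.]

```python
# Rough worst-case sizing for a dm-snapshot COW store used under an md
# recovery. Assumption: every chunk of the member gets rewritten, so the
# store must hold the whole member plus exception metadata (the 1%
# overhead below is a hedged guess, not a measured value).

MEMBER_KIB = 1462653888       # per-member size from the md3 status line (KiB)
METADATA_OVERHEAD = 0.01      # assumed exception-table overhead fraction

def worst_case_cow_kib(member_kib, overhead=METADATA_OVERHEAD):
    """COW store size (KiB) needed if every chunk of the member is rewritten."""
    return int(member_kib * (1 + overhead))

needed = worst_case_cow_kib(MEMBER_KIB)
print(f"COW store needed: ~{needed / 2**30:.2f} TiB per covered member")
```

On these assumptions, each snapshot would need on the order of the member size itself, which is consistent with the out-of-space failure reported below.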
>>>
>>> Cheers,
>>> Janos
>>>
>>>> --
>>>> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
>>>> the body of a message to majordomo@xxxxxxxxxxxxxxx
>>>> More majordomo info at http://vger.kernel.org/majordomo-info.html
>>
>> What is displayed there seems like it can't be correct. Please run
>>
>> mdadm -Evvs
>> mdadm -Dvvs
>>
>> and provide the results for us.
>
> I had wrongly assigned the dm devices (cross-linked), and the sync process
> is frozen.
> The snapshots grew to the maximum of their space, then both failed at the
> same time with a write error from running out of space.
> The md_sync process is frozen.
> (I had to push the reset button.)
>
> I think what we see is correct, because the process froze before exiting
> and could not change its state to failed.
>
> Cheers,
> Janos

Please reply to all. It sounds like you need a LOT more space. Please carefully try again.
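[Editor's note: since the snapshots filled up silently and froze the resync, it would help to watch the COW fill level while the next attempt runs. Below is a minimal sketch of parsing a snapshot line from `dmsetup status`; the sample line is hypothetical, and the `used/total` sector field is assumed to follow the kernel snapshot target's status format.]

```python
# Report how full a dm-snapshot COW store is, given one line of
# `dmsetup status` output for a snapshot target. The sample line below
# is hypothetical; real output comes from `dmsetup status <device>`.

def snapshot_fill_percent(status_line):
    """Extract used/total sectors from a snapshot status line -> percent full."""
    fields = status_line.split()
    # Assumed field layout: start length target used/total [metadata_sectors]
    used, total = fields[3].split("/")
    return 100.0 * int(used) / int(total)

sample = "0 2925307776 snapshot 2925307776/2925307776 11428"
print(f"COW store {snapshot_fill_percent(sample):.1f}% full")
```

Polling this in a loop during the resync (and aborting or growing the store before it reaches 100%) would avoid a repeat of the frozen md_sync process described above.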