On 14 February 2011 16:33, Reynald Borer <reynald.borer@xxxxxxxxx> wrote:
> Hi,
>
> Nice catch for the 1 bit difference, I didn't see it. I asked about
> bitmap reconstruction because this RAID was part of an LVM setup: the
> LVM setup used two distinct RAID 1 arrays, and I was not able to start
> the LVM correctly without this failing array.
>
> In the end, I was able to save my LVM by simply skipping the RAID 1
> and using one of the partitions directly. The LVM tool was clever
> enough to detect the MD bits and offered to remove them so the
> partition could be used directly, and it worked fine. Thus I was able
> to save my data.
>
> Thanks for your answer though.
>
> Regards,
> Reynald
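That rescue path is worth spelling out. These are version 0.90 superblocks, so the MD metadata sits at the very end of each partition and the LVM PV label at the start of the member stays readable on its own. For anyone who hits the same situation, the steps probably looked roughly like the sketch below -- this is only an illustration, not a transcript of what Reynald typed, and the volume group name is a placeholder:
-----------------
# Stop the inactive, half-assembled array so its members are released.
mdadm --stop /dev/md126

# Scan for LVM physical volumes directly on the underlying partition;
# this works because 0.90 MD metadata lives at the end of the device,
# leaving the PV label at the start intact.
pvscan
vgchange -ay myvg        # "myvg" is a placeholder volume group name

# Copy the data somewhere trustworthy first. Once it is safe, the stale
# MD superblock can be cleared from the member that is being kept:
mdadm --zero-superblock /dev/sda6
-----------------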
> On Thu, Jan 27, 2011 at 9:53 PM, NeilBrown <neilb@xxxxxxx> wrote:
>> On Wed, 26 Jan 2011 21:58:25 +0100 Reynald Borer <reynald.borer@xxxxxxxxx>
>> wrote:
>>
>>> Hello guys,
>>>
>>> I have been using md RAID for quite a long time now and it has always
>>> worked fine, until recently when I upgraded some hardware on my
>>> workstation. Unfortunately the new hardware proved to be very
>>> unstable, and I encountered a lot of hard lockups while the system
>>> was running. Those lockups recently made one of my RAID 1 arrays fail
>>> with the infamous error message "mdXX: bitmap superblock UUID
>>> mismatch".
>>>
>>> Here is what I found in the kernel logs when I try to activate the
>>> given RAID group:
>>> -----------------
>>> md/raid1:md126: active with 2 out of 2 mirrors
>>> md126: bitmap superblock UUID mismatch
>>> md126: bitmap file superblock:
>>>          magic: 6d746962
>>>        version: 4
>>>           uuid: 37102258.af9c1930.b8397fb8.eba356af
>>                           ^ this is an 'a'
>>
>>>         events: 199168
>>> events cleared: 199166
>>>          state: 00000000
>>>      chunksize: 524288 B
>>>   daemon sleep: 5s
>>>      sync size: 248075584 KB
>>> max write behind: 0
>>> md126: failed to create bitmap (-22)
>>> -----------------
>>>
>>> These error messages appear each time I try to run the RAID group.
>>> The content of /proc/mdstat is:
>>> -----------------
>>> md126 : inactive sdb6[0] sda6[1]
>>>       496151168 blocks
>>> -----------------
>>>
>>> If I examine both disks with mdadm -E, it shows a checksum mismatch
>>> on both partitions:
>>> -----------------
>>> root@bob # mdadm -E /dev/sda6
>>> /dev/sda6:
>>>           Magic : a92b4efc
>>>         Version : 0.90.03
>>>            UUID : 37102258:bf9c1930:b8397fb8:eba356af
>>                             ^ this is a 'b'
>>
>> So you certainly do have some sick hardware!!!
>>
>> I suggest that you find some hardware that you can trust,
>> mount one of the two devices (sdb6 or sda6), ignoring the raid stuff,
>> and copy data off to the device that you trust.
>>
>> Then start again.
>>
>> NeilBrown
>>
>>
>>>   Creation Time : Mon Aug  7 21:06:47 2006
>>>      Raid Level : raid1
>>>   Used Dev Size : 248075584 (236.58 GiB 254.03 GB)
>>>      Array Size : 248075584 (236.58 GiB 254.03 GB)
>>>    Raid Devices : 2
>>>   Total Devices : 2
>>> Preferred Minor : 6
>>>
>>>     Update Time : Wed Jan 12 00:12:44 2011
>>>           State : clean
>>>  Active Devices : 2
>>> Working Devices : 2
>>>  Failed Devices : 0
>>>   Spare Devices : 0
>>>        Checksum : e4883f8e - expected e4883e8e
>>>          Events : 199168
>>>
>>>
>>>       Number   Major   Minor   RaidDevice State
>>> this     1       8       38        1      active sync
>>>
>>>    0     0       8       70        0      active sync
>>>    1     1       8       38        1      active sync
>>> root@bob # mdadm -E /dev/sdb6
>>> /dev/sdb6:
>>>           Magic : a92b4efc
>>>         Version : 0.90.03
>>>            UUID : 37102258:bf9c1930:b8397fb8:eba356af
>>>   Creation Time : Mon Aug  7 21:06:47 2006
>>>      Raid Level : raid1
>>>   Used Dev Size : 248075584 (236.58 GiB 254.03 GB)
>>>      Array Size : 248075584 (236.58 GiB 254.03 GB)
>>>    Raid Devices : 2
>>>   Total Devices : 2
>>> Preferred Minor : 6
>>>
>>>     Update Time : Wed Jan 12 00:12:44 2011
>>>           State : clean
>>>  Active Devices : 2
>>> Working Devices : 2
>>>  Failed Devices : 0
>>>   Spare Devices : 0
>>>        Checksum : e4883fac - expected e4883eac
>>>          Events : 199168
>>>
>>>
>>>       Number   Major   Minor   RaidDevice State
>>> this     0       8       70        0      active sync
>>>
>>>    0     0       8       70        0      active sync
>>>    1     1       8       38        1      active sync
>>> -----------------
>>>
>>> Any idea how I could try to save my RAID group?
>>>
>>> Thanks in advance for your help.
>>>
>>> Best Regards,
>>> Reynald

Wow, that's cool. I suppose RAID1 is simple enough for this to be possible,
though. Still, cool. :-)

// M
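One footnote for the curious: the mismatches really are single flipped bits, as Reynald and Neil noted. The 'a'/'b' difference in the UUID and both checksum mismatches XOR to a power of two, which is easy to confirm from a shell (the values below are copied from the output quoted above):
-----------------
printf '%08x\n' $(( 0xaf9c1930 ^ 0xbf9c1930 ))   # prints 10000000 (one bit)
printf '%08x\n' $(( 0xe4883f8e ^ 0xe4883e8e ))   # prints 00000100 (one bit)
printf '%08x\n' $(( 0xe4883fac ^ 0xe4883eac ))   # prints 00000100 (one bit)
-----------------
Each result has exactly one bit set, which fits the picture of flaky hardware flipping the odd bit rather than wholesale corruption.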