Add a "--force" to the --assemble command; it should force the other 2 drives online even though they are slightly behind on events. From my understanding that may mean some of the last data written could be lost/corrupted (those last 4 events on those 2 disks). I would also suggest getting another disk and moving to RAID6 -- RAID5 is pretty scary once disks start going bad.

On Sun, Apr 12, 2015 at 4:42 PM, Thomas MARCHESSEAU <marchesseau@xxxxxxxxx> wrote:
> Hi team,
>
> Like probably a lot of new subscribers, I'm mailing you guys for help.
>
> I've been running a RAID5 on 7 HDDs for several months now (and for years
> on other systems) without problems.
> Last week I had a disk crash (sdg); I added a new drive (sdi) and
> rebuilt. That worked fine, and I don't think it is the cause of today's
> problem.
>
> Yesterday I upgraded my Ubuntu 14.10, and the system warned me with a
> message that I can't recall and reproduce exactly, but something like:
> md127 does not match /etc/mdadm/mdadm.conf, blah blah, run
> /usr/share/mdadm/mkconf and fix /etc/mdadm/mdadm.conf.
>
> I did that and rebooted; all looked good.
> All the drives were renamed after the reboot (the original sdg had been
> extracted from the bay).
>
> I had set up an rsync of my most important data to an external drive last
> night, which partially failed (only 25% was backed up, bad luck),
> (probably) because this morning I re-inserted the faulty drive by
> mistake (for information, I think the drive was in fact OK; the SATA
> connector was just a bit disconnected).
>
> I did not pay attention to the situation at the time, but a few hours
> later I ssh'd into my filer and my "home" (on the RAID partition) was
> no longer available.
> I didn't try fsck or anything else other than:
>
> mdadm --stop /dev/md127
> mdadm --assemble /dev/md127 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
> /dev/sdg /dev/sdh
> mdadm: /dev/md127 assembled from 5 drives - not enough to start the array.
>
> So I've read a bunch of useful links, one of them :)
> https://raid.wiki.kernel.org/index.php/RAID_Recovery , which says: don't
> do anything stupid before dropping a mail on the linux-raid mailing
> list ... so here I am.
>
> I've collected this useful info:
> mdadm --examine /dev/sd[a-z] | egrep 'Event|/dev/sd'
> /dev/sda: (system HDD)
> /dev/sdb:
> Events : 21958
> /dev/sdc:
> Events : 21958
> /dev/sdd:
> Events : 21958
> /dev/sde:
> Events : 21958
> /dev/sdf:
> Events : 21958
> /dev/sdg:
> Events : 21954 <-- here
> /dev/sdh:
> Events : 21954 <-- and here
>
> I also have a full copy of the mdadm --examine output.
>
> The strange thing is that my RAID array is now seen as RAID0 in
> mdadm --detail /dev/md127:
> /dev/md127:
> Version :
> Raid Level : raid0
> Total Devices : 0
>
> State : inactive
>
> But individually, per mdadm --examine, all the drives are RAID5 members.
>
> Anyone for help?
>
> I was about to perform:
> mdadm --create --assume-clean --level=5 --raid-devices=7 --size=11720300544
> /dev/md127 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh
>
> which looked a bit stupid before asking for help.
>
> Regards, Thomas
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@xxxxxxxxxxxxxxx
> More majordomo info at http://vger.kernel.org/majordomo-info.html
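
The force-assembly advice hinges on the lagging members' event counts being only slightly behind the rest of the array. A minimal sketch of that check, using the event counts quoted in the mail (on a live system you would parse the real `mdadm --examine /dev/sd[b-h] | egrep 'Event|/dev/sd'` output instead of this hard-coded list):

```shell
#!/bin/sh
# Hedged sketch: estimate how far the out-of-sync members lag the rest of
# the array. These are the Events values reported in the mail above.
events="21958 21958 21958 21958 21958 21954 21954"

max=0
min=99999999
for e in $events; do
  if [ "$e" -gt "$max" ]; then max=$e; fi
  if [ "$e" -lt "$min" ]; then min=$e; fi
done

echo "event gap: $((max - min))"
# A single-digit gap usually means only the last few writes on those two
# disks are at risk, so a forced assembly is worth trying, e.g.:
#   mdadm --stop /dev/md127
#   mdadm --assemble --force /dev/md127 /dev/sd[b-h]
# (Never reach for --create --assume-clean before a forced assembly has
# been tried; --create rewrites the superblocks.)
```

Here the gap is 4 events, which is why `--assemble --force` is the reasonable first step rather than recreating the array.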