I'd have to say, I'm currently ranking in the top 10 most unlucky people in
the Linux software RAID world right now.
I'm terrified of rebooting; I had 136 days uptime, and I needed to swap out
my wireless card. So, I shut down my system. For some time now, anytime I
reboot... my raid wants to rebuild a disk. It comes up 27 disks, 1 spare -
clean, degraded, rebuilding. The disk rebuilds, and everything is hunky
dory. It's a nuisance, but oh well. I've had so many issues in the past where
my raid would randomly kick disks (it ended up being firmware) that I've always
been grateful for how stable it's been since those issues were resolved. A
few months back, I actually had a disk fail. I swapped in a spare, it
rebuilt - all good.
Well today, during this illustrious rebuild... it appears I actually DID
have a disk fail. So, I have 26 disks... 1 partially rebuilt, and 1 failed.
Hoping and praying that the rebuild hadn't actually wiped the disk and had
maybe just synced things up -- I did a create with the 26 disks + the 1
partially rebuilt disk and 1 'missing' disk.... well, the array came up....
but I get access denied on a zillion things, and the filesystem is freaking out.
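(For the record, the create was roughly along these lines -- the device names,
RAID level, and chunk size below are placeholders, not my actual layout; the
real command used my original member order with the failed disk's slot given
as the literal word "missing":)

```shell
# Recreate the array over the existing members WITHOUT resyncing,
# leaving the failed disk's slot empty. Device names, level, and
# chunk size are placeholders -- disk order must match the original.
mdadm --create /dev/md0 --level=5 --chunk=64 --raid-devices=28 \
    --assume-clean \
    /dev/sda1 /dev/sdb1 ... /dev/sdz1 /dev/sdaa1 missing
```

(mdadm accepts the keyword "missing" in place of a device to create a
degraded array, and --assume-clean skips the initial resync.)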
Before I proceed any further... what are my options? Do I have any options?
I could run an fsck... but I held off, fearing it could just make things
worse.
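(The most I'd be comfortable trying right now is a read-only check, something
like the following -- /dev/md0 and the ext filesystem are assumptions about my
setup:)

```shell
# -n opens the filesystem read-only and answers "no" to every repair
# prompt, so it reports damage without writing anything to the disk.
fsck -n /dev/md0
```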
Thanks,
David M. Strang
-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html