On 19/05/2011 11:34, Pol Hallen wrote:
Hi folks :-)
I have a software RAID6 on Debian stable and a problem (!):
cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sdc1[0] sdf1[5] sdg1[4] sdh1[3] sdd1[1]
5860543744 blocks level 6, 64k chunk, algorithm 2 [6/5] [UU_UUU]
so I think /dev/sde is the failed disk.
How can I identify this disk?
blkid:
/dev/sdc1: UUID="9bd6372e-e2ea-b1d5-d2bd-c3cbad12f41d" TYPE="linux_raid_member"
/dev/sdd1: UUID="9bd6372e-e2ea-b1d5-d2bd-c3cbad12f41d" TYPE="linux_raid_member"
/dev/sde1: UUID="9bd6372e-e2ea-b1d5-d2bd-c3cbad12f41d" TYPE="linux_raid_member"
/dev/sdf1: UUID="9bd6372e-e2ea-b1d5-d2bd-c3cbad12f41d" TYPE="linux_raid_member"
/dev/sdg1: UUID="9bd6372e-e2ea-b1d5-d2bd-c3cbad12f41d" TYPE="linux_raid_member"
/dev/sdh1: UUID="9bd6372e-e2ea-b1d5-d2bd-c3cbad12f41d" TYPE="linux_raid_member"
They all have the same UUID; why?
And now, how can I resolve this?
(The identical UUIDs are expected: blkid reports the md array UUID stored in every member's superblock, so it identifies the array, not the individual disk.)
You can find out which disks/partitions are meant to be in the array with
mdadm -D /dev/md0
and if, as it appears, one is missing, you can see what state it's in with
mdadm -E /dev/sde1
(or similar).
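You can also read the failed slot straight off the [UU_UUU] status string in /proc/mdstat; each character is one raid slot and "_" marks a missing member. A minimal sketch (bash), using the status string from the mdstat output above:

```shell
# Each character of the status string is one raid slot; "_" = missing/failed.
status="[UU_UUU]"
status=${status//[\[\]]/}              # strip the brackets -> UU_UUU
for ((i = 0; i < ${#status}; i++)); do
    if [ "${status:i:1}" = "_" ]; then
        echo "slot $i is missing"
    fi
done
```

That prints slot 2, which matches the mdstat line: the listed members carry device numbers [0], [1], [3], [4], [5], and the absent number 2 is sde1.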
You should look through your logs to see what happened to
it. You should also check its SMART status with e.g.
smartctl -a /dev/sde
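In the smartctl output, the attributes that most reliably indicate a dying disk are Reallocated_Sector_Ct, Current_Pending_Sector, and Offline_Uncorrectable; a nonzero raw value on any of them is a bad sign. A sketch of pulling those out, with a here-doc of illustrative values standing in for real `smartctl -a /dev/sde` output:

```shell
# Illustrative sample of the SMART attribute table (values are made up).
smart_sample() {
cat <<'EOF'
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       0
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       8
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Always       -       0
EOF
}
# Print any attribute whose raw value (last field) is nonzero.
smart_sample | awk '$NF > 0 { print $2 " raw=" $NF }'
```

On the real disk you would pipe `smartctl -a /dev/sde` through the same awk filter.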
If it's not dead or dying, you may be able to re-add it with
mdadm /dev/md0 --add /dev/sde1
(if the array has a write-intent bitmap, --re-add instead of --add lets it resync only the blocks that changed while the disk was out).
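As for physically identifying which drive is /dev/sde: the persistent symlinks under /dev/disk/by-id/ encode the model and serial number, which you can read off the drive label. A sketch of extracting the id for sde; the sample line stands in for `ls -l /dev/disk/by-id/` output, and the model/serial shown is made up:

```shell
# Sample ls -l line from /dev/disk/by-id/ (illustrative model/serial).
sample='lrwxrwxrwx 1 root root 9 May 19 11:00 ata-WDC_WD2002FAEX-007BA0_WD-WMAY01234567 -> ../../sde'
# Print the id whose symlink target is sde.
awk '$NF == "../../sde" { print $(NF-2) }' <<<"$sample"
```

On the live machine, `ls -l /dev/disk/by-id/ | grep sde` does the same job, and `smartctl -i /dev/sde` also reports the serial number directly.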
Hope this helps!
Cheers,
John.