Re: Raid 1 array degrades on reboot

On Thu, 17 Jun 2010 11:15:11 +0100
David Watson <David.Watson@xxxxxxxxxx> wrote:

> Hello,
> 
> Recently I upgraded to 2TB disks on one of my servers and built a new
> degraded RAID1 array:
> mdadm --create /dev/md7 --level=1 --raid-devices=2 missing /dev/sdd1
> 
> added its entry to /etc/mdadm/mdadm.conf, then rebooted to install the
> second disk and added it to the array:
> mdadm --manage /dev/md7 --add /dev/sdb1
> 
> I updated /etc/mdadm/mdadm.conf, although I noticed no difference in the
> output of:
> mdadm --detail --scan
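> 
> For reference, the entry I appended looks something like this (the UUID
> shown here is a placeholder, not the real one):
> 
> ARRAY /dev/md7 metadata=1.0 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx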
> 
> 
> I use a monolithic kernel so there is no ramdisk to regenerate. This is
> my first 1.00 array, and the existing 0.9 arrays have never shown this
> issue. I have attempted the same process on a test server with no
> issues, and I can't really think of what to look at next.
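> 
> In case it matters, I confirmed the metadata version on the new members
> with something like (sdd1 being one of them):
> 
> mdadm --examine /dev/sdd1 | grep Version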
> 
> Apologies for the long post.
> 

Long posts are good....
However, I cannot see in your long post what the actual problem is.
You have given no evidence that anything degrades on boot.
No kernel logs, no "/proc/mdstat immediately after boot"...
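
Something like the following, captured immediately after a reboot, would
show where the array stands (md7 as per your description):

  cat /proc/mdstat
  mdadm --detail /dev/md7
  dmesg | grep -i md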

More info please.

NeilBrown
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

