Re: F10 install - RAID - nightmare (Solved)

Robin Laing wrote:
Hello,

The system is at home and so are all my notes.

Since I first started using RAID arrays, this is the first time I have had problems with an install. I have been fighting this for over a week. The machine was running F7 with RAID arrays.

I first tried to install F10 from a DVD that had been verified with both sha1sum and the media check during install, with the RAID array included in the installation.
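
For anyone repeating the checksum step, it is roughly this (the ISO filename here is an assumption; compare the output against the SHA1SUM file published alongside the image):

  sha1sum Fedora-10-x86_64-DVD.iso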

The install is working without the RAID array.

After installing on the non-RAID drive, I started working through the steps to get the RAID going.

After much reading I found out that, because of the problem install, I had to zero the superblocks. I did this and verified that no superblock data remained with mdadm --examine {partitions}.
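
For reference, the zeroing is done with mdadm's --zero-superblock option; a minimal sketch, assuming the partitions in question were ones like /dev/sdc1:

  mdadm --zero-superblock /dev/sdc1
  mdadm --examine /dev/sdc1

Once the zeroing has worked, the --examine call should report that no md superblock is detected on the partition.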

I then recreated the RAID partitions.

I am using a 1.5 TB drive partitioned into eight usable partitions.

I created the eight RAID arrays from those partitions using mdadm.
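
The create step for each mirror pair would have looked something like this (a sketch only; the sdb/sdc pairing is taken from the /proc/mdstat output further down):

  mdadm --create /dev/md1 --level=raid1 --raid-devices=2 /dev/sdb1 /dev/sdc1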

I created /etc/mdadm/mdadm.conf with mdadm --examine --scan as per the man page.

I am providing this in the hope that it will help someone, either today or in the future. Someone else's success helped me. After 10 days I can say I have a working F10 installation. Hey, 10 for 10. :)

To solve the issue I did a full re-install without the RAID array. I have read reports about anaconda having issues with RAID arrays. After making sure that the install was working well, I started working on the RAID.

With no /etc/mdadm.conf, the system scanned and created inactive arrays.

md_d9 : inactive sdc9[0](S)
      615723136 blocks

md_d8 : inactive sdc8[0](S)
      104864192 blocks

md_d7 : inactive sdc7[0](S)
      73408896 blocks

md_d6 : inactive sdc6[0](S)
      73408896 blocks

md_d5 : inactive sdc5[0](S)
      73408896 blocks

md_d3 : inactive sdc3[0](S)
      209728448 blocks

md_d2 : inactive sdc2[0](S)
      209728448 blocks

md_d1 : inactive sdc1[0](S)
      104864192 blocks

I created a new /etc/mdadm.conf file with the two drives in it, like this:
  DEVICE /dev/sdb* /dev/sdc*

I then scanned the drives by using
  mdadm --examine --scan
ARRAY /dev/md1 level=raid1 num-devices=2 \
   UUID=512ebb9b:05c4c817:22ba247c:074b5b12
ARRAY /dev/md2 level=raid1 num-devices=2 \
   UUID=bdd5f629:8788d740:b569c872:71bb0d9f
ARRAY /dev/md3 level=raid1 num-devices=2 \
   UUID=649f208e:07a19b6b:119481b7:34c39216
ARRAY /dev/md5 level=raid1 num-devices=2 \
   UUID=1a428b1f:5b8a7214:e195441f:012ae200
ARRAY /dev/md6 level=raid1 num-devices=2 \
   UUID=f222563b:a73aba50:e34cb61b:312f8680
ARRAY /dev/md7 level=raid1 num-devices=2 \
   UUID=dc04f2ee:11b76d67:77b1b096:0fea140a
ARRAY /dev/md8 level=raid1 num-devices=2 \
   UUID=82bbc5d9:f612fb5b:15177e5c:b51a48df
ARRAY /dev/md9 level=raid1 num-devices=2 \
   UUID=62c32558:310c027c:fdacac45:9b3ade78

I then ran
  mdadm --examine --scan >> /etc/mdadm.conf
as suggested in the mdadm man page. This added the array definitions to mdadm.conf.
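
With the DEVICE line already in place, the resulting /etc/mdadm.conf looks roughly like this (only the first array shown here):

  DEVICE /dev/sdb* /dev/sdc*
  ARRAY /dev/md1 level=raid1 num-devices=2 \
     UUID=512ebb9b:05c4c817:22ba247c:074b5b12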

I then ran
  mdadm -As
which found and activated only one of the two drives in each array, as shown by
  cat /proc/mdstat

[root@eagle2 etc]# cat /proc/mdstat
Personalities : [raid1]
md9 : active raid1 sdb9[1]
      615723136 blocks [2/1] [_U]

md8 : active raid1 sdb8[1]
      104864192 blocks [2/1] [_U]

md7 : active raid1 sdb7[1]
      73408896 blocks [2/1] [_U]

md6 : active raid1 sdb6[1]
      73408896 blocks [2/1] [_U]

md5 : active raid1 sdb5[1]
      73408896 blocks [2/1] [_U]

md3 : active raid1 sdb3[1]
      209728448 blocks [2/1] [_U]

md2 : active raid1 sdb2[1]
      209728448 blocks [2/1] [_U]

md1 : active raid1 sdb1[1]
      104864192 blocks [2/1] [_U]

md_d9 : inactive sdc9[0](S)
      615723136 blocks

md_d8 : inactive sdc8[0](S)
      104864192 blocks

md_d7 : inactive sdc7[0](S)
      73408896 blocks

md_d6 : inactive sdc6[0](S)
      73408896 blocks

md_d5 : inactive sdc5[0](S)
      73408896 blocks

md_d3 : inactive sdc3[0](S)
      209728448 blocks

md_d2 : inactive sdc2[0](S)
      209728448 blocks

unused devices: <none>

I then ran
  mdadm --stop /dev/md_d{x}

to stop each of the inactive RAID devices shown in /proc/mdstat.
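
Stopping them one at a time works; a quick loop does the same job (a sketch, assuming the inactive devices are md_d1, md_d2, md_d3 and md_d5 through md_d9 as listed above):

  for d in /dev/md_d{1,2,3,5,6,7,8,9}; do mdadm --stop "$d"; done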

I tried a reboot and only one of the two drives was starting. After more reading of bug reports, I came across a discussion about adding
  auto=md
to the ARRAY line for each RAID array in the mdadm.conf file.

Old

ARRAY /dev/md1 level=raid1 num-devices=2 \
   UUID=512ebb9b:05c4c817:22ba247c:074b5b12

New

ARRAY /dev/md1 level=raid1 auto=md num-devices=2 \
   UUID=512ebb9b:05c4c817:22ba247c:074b5b12
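
Editing each ARRAY line by hand is fine for eight arrays, but a one-liner can make the same change in place (a sketch, assuming the lines look exactly like the generated ones above):

  sed -i 's/level=raid1 /level=raid1 auto=md /' /etc/mdadm.conf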

Now running
  mdadm -As
gives this nice message.

mdadm: /dev/md1 has been started with 2 drives.
mdadm: /dev/md2 has been started with 2 drives.
mdadm: /dev/md3 has been started with 2 drives.
mdadm: /dev/md5 has been started with 2 drives.
mdadm: /dev/md6 has been started with 2 drives.
mdadm: /dev/md7 has been started with 2 drives.
mdadm: /dev/md8 has been started with 2 drives.
mdadm: /dev/md9 has been started with 2 drives.

Confirmed by

[root@eagle2 etc]# cat /proc/mdstat
Personalities : [raid1]
md9 : active raid1 sdc9[0] sdb9[1]
      615723136 blocks [2/2] [UU]

md8 : active raid1 sdc8[0] sdb8[1]
      104864192 blocks [2/2] [UU]

md7 : active raid1 sdc7[0] sdb7[1]
      73408896 blocks [2/2] [UU]

md6 : active raid1 sdc6[0] sdb6[1]
      73408896 blocks [2/2] [UU]

md5 : active raid1 sdc5[0] sdb5[1]
      73408896 blocks [2/2] [UU]

md3 : active raid1 sdc3[0] sdb3[1]
      209728448 blocks [2/2] [UU]

md2 : active raid1 sdc2[0] sdb2[1]
      209728448 blocks [2/2] [UU]

md1 : active raid1 sdc1[0] sdb1[1]
      104864192 blocks [2/2] [UU]

unused devices: <none>

And it comes up correctly after a reboot.
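
For a deeper check than /proc/mdstat after the reboot, mdadm can report the per-array state as well, for example:

  mdadm --detail /dev/md1

which should show the array as clean with both members listed as active sync.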

--
Robin Laing
