F29 RAID 5-LUKS-LVM malfunction

Hi, I have a 3-disk RAID 5 where one disk (/dev/sdb) broke. /dev/sda1 holds /boot; /dev/sda2, /dev/sdb1, and /dev/sdc1 are the RAID partitions. While moving disks around to recover, two unfortunate things happened. First, the system stopped recognizing the second disk (the one with no /boot on it), which I have since added back into the array. Second, the system in emergency mode has no cryptsetup for opening the LUKS container and will not recognize my USB ports. How do I get the system to open my disks as before?
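
For reference, the layout I am trying to get back to (reconstructed from memory, so the LUKS mapping and LVM names are just placeholders) is roughly:

/dev/sda1                           -> /boot
/dev/sda2 + /dev/sdb1 + /dev/sdc1   -> /dev/md127 (RAID 5, metadata 1.2)
/dev/md127                          -> LUKS container
LUKS container                      -> LVM physical volume -> volume group -> root and other logical volumes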

Attempting to boot, journalctl shows:

systemd-udevd[434]: Process '/sbin/partx -d --nr 1-1024 /dev/sdc' failed with exit code 1.
systemd-udevd[426]: Process '/sbin/partx -d --nr 1-1024 /dev/sdb' failed with exit code 1.
systemd-udevd[434]: inotify_add_watch(8, /dev/sdc1, 10) failed: No such file or directory
systemd-udevd[426]: Process '/sbin/mdadm  -I /dev/sdb' failed with exit code 2.
systemd-udevd[434]: inotify_add_watch(8, /dev/sdb1, 10) failed: No such file or directory

Apparently the boot sequence is pre-emptively deleting the sdb and sdc partitions from /dev. When I execute
"partx -a --nr 1-1024 /dev/sdb" and "partx -a --nr 1-1024 /dev/sdc", the device nodes for their first partitions are re-created.
Then I edit mdadm.conf (which is apparently automatically written by anaconda each time I try to boot) to read:
"DEVICE /dev/sda2 /dev/sdb1 /dev/sdc1
MAILADDR root
HOMEHOST <system>
ARRAY /dev/md127 level=5 devices=/dev/sda2,/dev/sdb1,/dev/sdc1 metadata=1.2 UUID=f6224251:0ba59f55:05d9cdf4:98e79eca"
so I can execute:
"mdadm --stop /dev/md127" to stop the array with 1 disk and
"mdadm --assemble /dev/md127" to start the array with all 3 disks

The RAID is recognized, but now I'm stuck. What comes next for opening LUKS and then LVM? Thanks.
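
P.S. My rough understanding of the manual sequence, once cryptsetup is available again, is something like the following (the mapping name "cryptraid" and the LVM names are placeholders, since I don't have the real names in front of me):

cryptsetup luksOpen /dev/md127 cryptraid   # prompts for the LUKS passphrase
vgscan                                     # scan for the volume group inside the container
vgchange -ay                               # activate its logical volumes
mount /dev/VG/root /mnt                    # "VG" and "root" are placeholder LVM names

but I cannot run any of this from the emergency shell, so what I really need is for the normal boot to do it again.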