Hi there,

After attempting to add a GPU to a VM running on a CentOS 7 KVM host of mine, the machine forcibly rebooted. Upon reboot, my /dev/md0 RAID 6 XFS array would not start.

Background: Approximately 3 weeks ago I added 3 additional 3TB HDDs to my existing 5-disk array and grew it using the *raw* disks rather than partitions. (Using raw disks was my mistake; it had been a year since I last expanded this array and I simply forgot the steps.) Everything appeared to be working fine until last night: when I added the GPU via VMM, the host itself rebooted.

Unfortunately, the machine has no network access at the moment, so I can only provide pictures of the text displayed on the screen. The system is booting into emergency mode because the /dev/md0 array isn't starting (and then NFS fails, and so on).

smartctl shows no errors on any of the disks, and mdadm --examine shows no superblocks on the 3 disks I added. The array is in the inactive state and shows only 5 disks. On top of that, it turns out I had grown the array while SELinux was enforcing rather than permissive, so there is an audit log entry of mdadm trying to modify /etc/mdadm.conf. I'm guessing it was trying to update the configuration file with the new drive layout.

smartctl shows each drive is fine, and the first 5 drives have equal event counts, so I'm presuming the data is all still intact.

Any advice on how to proceed?

Thanks!
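
P.S. For reference, this is roughly what I've been running from the emergency shell (device names below are placeholders, since I can only read the results off the console):

    cat /proc/mdstat              # md0 shows as inactive with only 5 members
    smartctl -a /dev/sda          # repeated for each of the 8 disks; no errors reported
    mdadm --examine /dev/sda1     # the original 5 members (on partitions) show superblocks with matching event counts
    mdadm --examine /dev/sdf      # the 3 disks I added as whole disks report no md superblock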