On Fri, 2020-06-05 at 14:58 -0700, Samuel Sieb wrote:
> On 6/5/20 2:30 PM, Patrick O'Callaghan wrote:
> > On Fri, 2020-06-05 at 13:02 -0700, Samuel Sieb wrote:
> > > You don't need an mdadm.conf file or anything. The mdraid system
> > > will automatically build an array when it sees the drives appear.
> > > And if you are using UUIDs in any mount descriptions, that will
> > > automatically work as well.
> > 
> > That doesn't seem to be what's happening:
> > 
> > 1) Starting from a fresh reboot, with the array unmounted but active
> > according to mdadm, I make it inactive:
> > 
> > # echo inactive > /sys/block/md127/md/array_state
> 
> That's not the correct option. From the kernel docs:
>      When written, doesn't tear down array, but just stops it
> 
> I believe that "clear" would be the right option.

What I needed was 'mdadm --stop', as you point out below.

> > (At this point I can make it active again using "echo active ...")
> 
> That shows that you haven't properly stopped it; it's still configured.
> 
> > 2) I now delete the component drives:
> > 
> > # echo 1 > /sys/block/sdd/device/delete
> > # echo 1 > /sys/block/sde/device/delete
> 
> That will make the RAID array very unhappy.
> 
> > It's very possible (indeed likely) that I'm stopping the array in the
> > wrong way, but I don't see any other way to do it. The mdadm man page
> > mentions '-A' as the way to start an array, but doesn't talk about how
> > to stop it, so it could just be leaving out-of-date status information
> > around and that's what's confusing it.
> 
> From the mdadm man page:
>        -S, --stop
>               deactivate array, releasing all resources.

Yes. Can't think why I didn't notice that before. Thanks

poc
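
A few reference notes on the commands discussed above.

On the auto-mounting point: the UUID that makes mounting work automatically is the filesystem UUID on the assembled md device, as reported by blkid. A typical /etc/fstab line would look something like this (the mount point and filesystem type here are only examples):

UUID=<uuid-reported-by-blkid-for-/dev/md127>  /mnt/raid  ext4  defaults,nofail  0 0

The 'nofail' option is worth having for a hot-pluggable array, so that boot doesn't hang when the drives are absent.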
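
On the sysfs states: the current state can be read back from the same file, which makes the difference between "inactive" and a proper stop visible (md127 as in the thread):

# cat /sys/block/md127/md/array_state

A healthy running array typically reports "clean" or "active". After "echo inactive" it reports "inactive", but /dev/md127 and its sysfs tree still exist, which is exactly why "echo active" can bring it straight back.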
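
On step 2: once the component drives have been deleted through sysfs, they can usually be brought back without a reboot by rescanning the SCSI host they hang off. The host number varies per machine; "host2" below is only an example, so list /sys/class/scsi_host/ to find the right one:

# ls /sys/class/scsi_host/
# echo "- - -" > /sys/class/scsi_host/host2/scan

The three dashes are wildcards for channel, target and LUN.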
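
And the stop/restart cycle that actually works, using the device names from the thread (if the array members are partitions rather than whole disks, substitute e.g. /dev/sdd1):

# mdadm --stop /dev/md127
# cat /proc/mdstat

md127 should now be gone from /proc/mdstat. To bring the array back, either let mdraid auto-assemble it when the drives reappear, or assemble it by hand:

# mdadm --assemble --scan

or, naming the members explicitly:

# mdadm --assemble /dev/md127 /dev/sdd /dev/sde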