/dev/nvme0n1p1:
   MBR Magic : aa55
Partition[0] : 4294967295 sectors at 1 (type ee)
/dev/nvme1n1p1:
   MBR Magic : aa55
Partition[0] : 4294967295 sectors at 1 (type ee)
/dev/nvme2n1p1:
   MBR Magic : aa55
Partition[0] : 4294967295 sectors at 1 (type ee)

The above output seems to indicate a GPT partition table on nvme2n1p1, if I am reading that right, and that is on what should be the md devices.

On Fri, Aug 9, 2024 at 7:25 PM Ryan England <ryan.england@xxxxxxxxxxx> wrote:
>
> Hello Pascal,
>
> Thank you for the update. I appreciate you contributing to the conversation.
>
> Why wouldn't you use the entire disk? What are the risks? I've seen mixed
> info on this. Some use the entire disk and others use partitions.
>
> You also mentioned using wipefs to wipe the metadata. Would you run the
> following:
> - wipefs -a /dev/nvme0n1*
> - etc
>
> Regards,
> Ryan E.
>
>
> On August 9, 2024 6:28:10 PM EDT, Pascal Hambourg <pascal@xxxxxxxxxxxxxxx> wrote:
>>
>> On 09/08/2024 at 23:36, Ryan England wrote:
>>>
>>> I was able to set some time aside to work on the system today. I used
>>> parted to remove the partitions.
>>>
>>> Once the partitions were removed, I created the array as RAID5 using
>>> /dev/nvme0n1, /dev/nvme1n1, and /dev/nvme2n1. Including my commands
>>> below:
>>> - parted /dev/nvme0n1 - print, rm 1, quit
>>> - parted /dev/nvme1n1 - print, rm 1, quit
>>> - parted /dev/nvme2n1 - print, rm 1, quit
>>
>> If you are going to use whole (unpartitioned) drives as RAID members
>> (which I do not recommend), then you must not only remove the partitions
>> but also all partition table metadata. wipefs comes in handy. Otherwise,
>> some parts of your system may be confused by the remaining partition
>> table metadata and may even "restore" the primary GPT partition table
>> from the backup partition table, overwriting the RAID metadata.
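
For anyone following the thread, below is a minimal sketch of the wipe-and-verify sequence being discussed, assuming the three whole-disk members named above (device names are taken from this thread, not a prescription; adjust to your setup and double-check you are targeting the right disks before erasing anything). On reasonably recent util-linux, wipefs -a erases every signature libblkid detects, which includes the protective MBR and both the primary and backup GPT headers:

- wipefs /dev/nvme0n1    (no options: list detected signatures without erasing anything)
- wipefs -a /dev/nvme0n1    (erase all detected signatures on the whole-disk device)
- wipefs -a /dev/nvme1n1
- wipefs -a /dev/nvme2n1
- mdadm --examine /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1    (confirm no leftover MBR/GPT or RAID metadata is reported)

Note that the glob form suggested earlier (wipefs -a /dev/nvme0n1*) would also match partition device nodes such as /dev/nvme0n1p1 if they still exist; once the partitions have been removed, addressing the bare disk device is sufficient.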