Re: RAID missing post reboot

Hello Pascal,

Thank you for the update. I appreciate you contributing to the conversation.

Why wouldn't you use the entire disk? What are the risks? I've seen mixed info on this. Some use the entire disk and others use partitions.

You also mentioned using wipefs to wipe the metadata. Would you run the following:
- wipefs -a /dev/nvme0n1*
- etc
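Something like this is what I had in mind (a sketch only; the device names are taken from earlier in the thread, and this is destructive, so I would double-check the names first):

```shell
# Hypothetical sketch: remove every signature wipefs recognizes
# (primary GPT header, backup GPT header at the end of the disk,
# old md superblocks, filesystem magic) from each member drive.
# DESTRUCTIVE -- verify the device names before running.
wipe_members() {
    for dev in "$@"; do
        wipefs --all -- "$dev"
    done
}

# Example (only once you are sure these are the right drives):
# wipe_members /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1
```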

Regards,
Ryan E.

On August 9, 2024 6:28:10 PM EDT, Pascal Hambourg <pascal@xxxxxxxxxxxxxxx> wrote:
>On 09/08/2024 at 23:36, Ryan England wrote:
>> 
>> I was able to set some time aside to work on the system today. I used
>> parted to remove the partitions.
>> 
>> Once the partitions were removed, I created the array as RAID5 using
>> /dev/nvme0n1, /dev/nvme1n1, and /dev/nvme2n1. Including my commands
>> below:
>> - parted /dev/nvme0n1 - print, rm 1, quit
>> - parted /dev/nvme1n1 - print, rm 1, quit
>> - parted /dev/nvme2n1 - print, rm 1, quit
>
>If you are going to use whole (unpartitioned) drives as RAID members (which I do not recommend), then you must remove not only the partitions but all partition table metadata; wipefs comes in handy. Otherwise some parts of your system may be confused by the leftover partition table metadata and may even "restore" the primary GPT partition table from the backup copy at the end of the disk, overwriting RAID metadata.
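For what it's worth, this can be checked non-destructively: run without --all, wipefs only reports the signatures it finds and modifies nothing. A sketch, with the device names assumed from the thread:

```shell
# Read-only check: list whatever signatures are still present on each
# drive. Empty output under a drive's heading means wipefs sees no
# leftover GPT/RAID/filesystem metadata there.
list_signatures() {
    for dev in "$@"; do
        echo "== $dev =="
        wipefs -- "$dev"
    done
}

# Example:
# list_signatures /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1
```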




