Hi,

I have actually noticed some of those messages during server startup. They were
something like "primary GPT missing but secondary found -> using that". Most
likely the other system I used for testing was not just reading the secondary
table but overwrote the tables. After I hopefully get the data readable again,
I'll create two backups of the data and the old RAID will go to bit heaven.
Then I might even try to reproduce that wrong setup and play with it a bit to
see how things work.

I was able to get the order of the devices from an old syslog file (smartd)
and then created the array again:

root@NAS-server:~# mdadm --create --assume-clean --level=6 --raid-devices=6 \
    --size=3906887168 --chunk=512K --data-offset=254976s /dev/md0 \
    /dev/mapper/sdc /dev/mapper/sda /dev/mapper/sde /dev/mapper/sdd \
    /dev/mapper/sdb /dev/mapper/sdf
mdadm: partition table exists on /dev/mapper/sdc
mdadm: partition table exists on /dev/mapper/sdc but will be lost or meaningless after creating array
mdadm: partition table exists on /dev/mapper/sda
mdadm: partition table exists on /dev/mapper/sda but will be lost or meaningless after creating array
mdadm: partition table exists on /dev/mapper/sde
mdadm: partition table exists on /dev/mapper/sde but will be lost or meaningless after creating array
mdadm: partition table exists on /dev/mapper/sdd
mdadm: partition table exists on /dev/mapper/sdd but will be lost or meaningless after creating array
mdadm: partition table exists on /dev/mapper/sdb
mdadm: partition table exists on /dev/mapper/sdb but will be lost or meaningless after creating array
mdadm: /dev/mapper/sdf appears to be part of a raid array:
    level=raid6 devices=6 ctime=Thu May 18 22:56:47 2017
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

Running fsck reported so many errors that the mounted ext4 filesystem was
empty. I reset the overlay array and am now running an analysis with testdisk.
It will take a long time.

Thanks for the help.
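As a side note for anyone following along, the geometry in that mdadm command can be sanity-checked with a little arithmetic. This is only an illustrative sketch, not part of the recovery; it assumes mdadm's --size is given in KiB per member and that the 's' suffix on --data-offset means 512-byte sectors:

```python
# Sanity-check sketch of the mdadm geometry used above (illustrative only).
# Assumptions: --size is per-member usable size in KiB; --data-offset=...s
# is in 512-byte sectors; RAID6 gives (n - 2) data members of usable space.
SECTOR = 512
KIB = 1024

raid_devices = 6
parity_devices = 2            # RAID6 reserves two members' worth for parity
size_kib = 3906887168         # --size=3906887168
data_offset_sectors = 254976  # --data-offset=254976s

data_offset_bytes = data_offset_sectors * SECTOR
member_bytes = size_kib * KIB
usable_bytes = (raid_devices - parity_devices) * member_bytes

print(f"data offset : {data_offset_bytes} bytes "
      f"({data_offset_bytes / 2**20} MiB)")
print(f"per member  : {member_bytes / 2**40:.2f} TiB")
print(f"array usable: {usable_bytes / 2**40:.2f} TiB")
```

With these assumptions, the data offset works out to 124.5 MiB into each member, which is the kind of figure worth double-checking against an old `mdadm --examine` output if one survives.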
Best Regards,
Topi

On Fri, 1 Mar 2024 at 20:46, Pascal Hambourg <pascal@xxxxxxxxxxxxxxx> wrote:
>
> On 01/03/2024 at 17:27, Roger Heflin wrote:
> >
> > Do "fdisk -l /dev/sd[a-h]", given 4tb devices they are probably GPT partitions.
>
> Not "probably". "type ee" means GPT protective MBR.
>
> > Do not recreate the array; to do that you must have the correct device
> > order and all other parameters for the raid correct.
> >
> > You will also need to determine how/what created the partitions.
> > There are reports that some motherboards will "fix" disks without a
> > partition table. If you dual boot into Windows I believe it also
> > wants to "fix" it.
>
> For now there are two competing theories:
> a) if the disk has no partition table, the BIOS creates a new
> partition table;
> b) if the disk has a backup GPT partition table but a missing or
> corrupted primary GPT partition table, the BIOS restores the
> primary partition table from the backup.
>
> Theory a) implies that even if you manage to re-create the RAID
> superblocks, they will be overwritten again at the next boot. Your options are:
> - back up the data before the next boot, re-create the RAID array in
> partitions instead of whole disks, and restore the data;
> - or back up the data before the next boot and re-create the RAID array
> in partitions with a --data-offset value chosen so that the data area
> remains at the same disk offset.
>
> Theory b) implies that if you manage to re-create the RAID superblocks,
> they will be overwritten again at the next boot unless you also erase the
> protective MBR and the primary and backup GPT partition table signatures
> with wipefs.
>
> > You should likely also read the last 2-4 weeks of this group's
> > archive. Another guy with a very similar partition table accident
> > recovered his array and posted about the recovery steps he
> > needed.
>
> The discussion subject was "Requesting help recovering my array" and
> started in January.
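Pascal's theory b) hinges on the three on-disk signatures that wipefs would erase: the protective MBR boot signature and the "EFI PART" magic of the primary and backup GPT headers. The sketch below is purely illustrative (it operates on a synthetic in-memory image, not a real device; on real disks you would use `wipefs`), but it shows where those signatures live under the usual 512-byte-sector layout:

```python
# Illustrative sketch of the GPT-related signatures wipefs erases.
# Assumes 512-byte logical sectors; the "disk" here is a synthetic
# in-memory image, NOT a real block device.
SECTOR = 512

def find_gpt_signatures(img: bytes):
    """Return (offset, label) pairs for GPT-related signatures in img."""
    found = []
    # Protective MBR: 0x55AA boot signature at bytes 510-511 of LBA 0
    if img[510:512] == b"\x55\xaa":
        found.append((510, "MBR boot signature"))
    # Primary GPT header: "EFI PART" magic at the start of LBA 1
    if img[SECTOR:SECTOR + 8] == b"EFI PART":
        found.append((SECTOR, "primary GPT header"))
    # Backup GPT header: "EFI PART" magic at the start of the last LBA
    last = len(img) - SECTOR
    if img[last:last + 8] == b"EFI PART":
        found.append((last, "backup GPT header"))
    return found

# Build a tiny fake disk carrying all three signatures
disk = bytearray(64 * SECTOR)
disk[510:512] = b"\x55\xaa"
disk[SECTOR:SECTOR + 8] = b"EFI PART"
backup = len(disk) - SECTOR
disk[backup:backup + 8] = b"EFI PART"

# "Wiping" = zeroing the magic bytes, which is essentially what wipefs does
wiped = bytearray(disk)
wiped[510:512] = bytes(2)
wiped[SECTOR:SECTOR + 8] = bytes(8)
wiped[backup:backup + 8] = bytes(8)
```

The point of the backup-header check is exactly Pascal's warning: zeroing only LBA 0 and LBA 1 leaves the backup header at the end of the disk intact, and firmware following theory b) could restore the partition table from it.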
> <https://marc.info/?t=170595323800003&r=1&w=2>