Hi all,

after we made a backup to other hdds, we recovered the raid with the
following command:

mdadm --create --assume-clean /dev/md2 --level=6 --raid-devices=24 /dev/sda /dev/sd[d-z]

XFS then did a journal check and the files were back again.

Best regards,
Volker

> On 24.04.2019 at 11:34, Volker Lieder <v.lieder@xxxxxxxxxx> wrote:
>
> Hi Andreas,
>
> the first setup was a try with zfs; perhaps the partition scheme was not
> removed completely.
>
> The customer ordered a server like this one. We will back up all disks
> via dd first and then try different other things.
>
> Thank you
>
>> On 23.04.2019 at 17:29, Andreas Klauer <Andreas.Klauer@xxxxxxxxxxxxxx> wrote:
>>
>> On Tue, Apr 23, 2019 at 04:58:06PM +0200, Andreas Klauer wrote:
>>>> /dev/sda:
>>>>    MBR Magic : aa55
>>>> Partition[0] : 4294967295 sectors at 1 (type ee)
>>>> mdadm: No md superblock detected on /dev/sda1.
>>>> mdadm: No md superblock detected on /dev/sda9.
>>>> [ 6.044546] sda: sda1 sda9
>>>
>>> The partition numbering is weird too - partition 1, partition 9,
>>> nothing in between for all of them.
>>>
>>> Is there anything on these partitions? (file -s /dev/sd*)
>>> Any data written to partitions likely damaged data on the RAID.
>>
>> Apparently this partition 1 / partition 9 scheme is common
>> for Solaris ZFS / ZFS on Linux?
>>
>> https://www.slackwiki.com/ZFS_root
>>
>>> Currently, when ZFSonLinux is given a whole disk device
>>> like /dev/sdb to use, it will automatically GPT partition it
>>> with a bf01 partition #1 that is the whole disk except
>>> a small 8MiB type bf07 partition #9 at the end of the disk
>>> (/dev/sdb1 and /dev/sdb9).
>>
>> I don't use ZFS myself, so... no idea, sorry.
>> I thought you were asking about mdadm RAID.
>>
>> Regards
>> Andreas Klauer
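
For reference, a rough sketch of the recovery sequence described above,
assuming the same 24-member layout (/dev/sda plus /dev/sd[d-z]) and that
the level, device order, chunk size and metadata version of the re-created
array match the original; the backup path and mount point are placeholders,
not taken from this setup:

  # 1. Image every member disk before touching anything (backup target
  #    /backup is just an example path).
  for d in /dev/sda /dev/sd[d-z]; do
      dd if="$d" of=/backup/$(basename "$d").img bs=1M conv=noerror,sync status=progress
  done

  # 2. Re-create the array in place without resyncing. --assume-clean
  #    leaves the on-disk data untouched; this only recovers the data if
  #    level, device order, chunk size and metadata version match the
  #    original array exactly.
  mdadm --create --assume-clean /dev/md2 --level=6 --raid-devices=24 \
      /dev/sda /dev/sd[d-z]

  # 3. Mount read-only first; XFS replays its journal on mount (the
  #    "journal check" mentioned above), then the data can be inspected.
  mount -o ro /dev/md2 /mnt

  # Optional: after unmounting, a dry-run filesystem check.
  umount /mnt
  xfs_repair -n /dev/md2

The dd images come first precisely because a --create with the wrong
parameters leaves the filesystem unreadable; with the images saved, the
attempt can be repeated with different assumptions.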
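
And a few standard inspection commands to recognise the partition-1 /
partition-9 whole-disk layout that ZFS on Linux leaves behind, as discussed
further down in the thread; /dev/sdb here is only an example device, not
one from this setup:

  # Print the GPT layout; a ZFS whole-disk member typically shows one large
  # partition 1 (type bf01) and a small 8 MiB partition 9 (type bf07).
  sgdisk --print /dev/sdb

  # Look for leftover filesystem / raid signatures without changing anything.
  file -s /dev/sdb /dev/sdb1 /dev/sdb9
  wipefs --no-act /dev/sdb /dev/sdb1 /dev/sdb9

  # Check the whole disk and both partitions for md superblocks.
  mdadm --examine /dev/sdb /dev/sdb1 /dev/sdb9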