Re: lost software raid information

Hi Andreas,

The first setup was an attempt with ZFS, so perhaps the old partition scheme was not removed completely.
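To see what is left over, we will probably first check for old signatures before touching the disks. A rough sketch, using the device names from this system:

    # list remaining filesystem/RAID signatures without erasing anything
    wipefs -n /dev/sda /dev/sda1 /dev/sda9

    # show any ZFS labels still present on the partition (needs zfsutils)
    zdb -l /dev/sda1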

The customer ordered a server like this one. We will back up all disks via dd first and then try various other things.
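A minimal sketch of that backup step, assuming the disks are sda through sdd and /backup is a placeholder path with enough free space:

    for d in sda sdb sdc sdd; do
        dd if=/dev/$d of=/backup/$d.img bs=1M conv=noerror,sync status=progress
    done

On disks with read errors, ddrescue would probably be a safer choice than dd, since conv=noerror,sync silently pads unreadable blocks with zeros.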

Thank you

> On 23.04.2019 at 17:29, Andreas Klauer <Andreas.Klauer@xxxxxxxxxxxxxx> wrote:
> 
> On Tue, Apr 23, 2019 at 04:58:06PM +0200, Andreas Klauer wrote:
>>> /dev/sda:
>>>   MBR Magic : aa55
>>> Partition[0] :   4294967295 sectors at            1 (type ee)
>>> mdadm: No md superblock detected on /dev/sda1.
>>> mdadm: No md superblock detected on /dev/sda9.
>>> [    6.044546]  sda: sda1 sda9
>> 
>> The partition numbering is weird too - partition 1, partition 9, 
>> nothing in between for all of them.
>> 
>> Is there anything on these partitions? (file -s /dev/sd*)
>> Any data written to partitions likely damaged data on the RAID.
> 
> Apparently this partition 1 partition 9 scheme is common 
> for Solaris ZFS / ZFS on Linux?
> 
> https://www.slackwiki.com/ZFS_root
> 
>> Currently, when ZFSonLinux is given a whole disk device 
>> like /dev/sdb to use, it will automatically GPT partition it 
>> with a bf01 partition #1 that is the whole disk except 
>> a small 8MiB type bf07 partition #9 at the end of the disk
>> (/dev/sdb1 and /dev/sdb9).
> 
> I don't use ZFS myself, so... no idea, sorry. 
> I thought you were asking about mdadm RAID.
> 
> Regards
> Andreas Klauer
