Hello Chris,

On Friday, 21 September 2012, 16:43:52, Chris Murphy wrote:
> On Sep 21, 2012, at 1:35 PM, Chris Murphy wrote:
> > If you're making the RAID with that, it defaults to metadata version 1.2.
> > But to be sure: mdadm -E /dev/mdX
>
> Scratch that. I was confused. Try these instead:

This is the output from an openSUSE 12.2 system (DX58SO2):

> mdadm --detail-platform

       Platform : Intel(R) Matrix Storage Manager
        Version : 11.0.0.1339
    RAID Levels : raid0 raid1 raid10 raid5
    Chunk Sizes : 4k 8k 16k 32k 64k 128k
    2TB volumes : supported
      2TB disks : supported
      Max Disks : 6
    Max Volumes : 2 per array, 4 per controller
 I/O Controller : /sys/devices/pci0000:00/0000:00:1f.2 (SATA)

> mdadm -D /dev/md/imsm

On my system I have /dev/md/imsm0:

/dev/md/imsm0:
        Version : imsm
     Raid Level : container
  Total Devices : 2
Working Devices : 2
           UUID : 363f146f:e7f29dc8:f05996c3:577ead6a
  Member Arrays : /dev/md/Volume0

    Number   Major   Minor   RaidDevice
       0       8        0        -        /dev/sda
       1       8       16        -        /dev/sdb

> mdadm -E /dev/sdX

/dev/sda:
          Magic : Intel Raid ISM Cfg Sig.
        Version : 1.1.00
    Orig Family : e3958f4b
         Family : e3958f4b
     Generation : 00013417
     Attributes : All supported
           UUID : 363f146f:e7f29dc8:f05996c3:577ead6a
       Checksum : 3e9e527c correct
    MPB Sectors : 1
          Disks : 2
   RAID Devices : 1

  Disk00 Serial : 6QF4WDE3
          State : active
             Id : 00000000
    Usable Size : 625137928 (298.09 GiB 320.07 GB)

[Volume0]:
           UUID : ec120401:b6ed52e6:3814fac4:48fcf4fc
     RAID Level : 1
        Members : 2
          Slots : [UU]
    Failed disk : none
      This Slot : 0
     Array Size : 625137664 (298.09 GiB 320.07 GB)
   Per Dev Size : 625137928 (298.09 GiB 320.07 GB)
  Sector Offset : 0
    Num Stripes : 2441944
     Chunk Size : 64 KiB
       Reserved : 0
  Migrate State : idle
      Map State : normal
    Dirty State : dirty

  Disk01 Serial : 6QF4WF5Z
          State : active
             Id : 00000001
    Usable Size : 625137928 (298.09 GiB 320.07 GB)

/dev/sdb:
          Magic : Intel Raid ISM Cfg Sig.
        Version : 1.1.00
    Orig Family : e3958f4b
         Family : e3958f4b
     Generation : 0001342f
     Attributes : All supported
           UUID : 363f146f:e7f29dc8:f05996c3:577ead6a
       Checksum : 3e9e5294 correct
    MPB Sectors : 1
          Disks : 2
   RAID Devices : 1

  Disk01 Serial : 6QF4WF5Z
          State : active
             Id : 00000001
    Usable Size : 625137928 (298.09 GiB 320.07 GB)

[Volume0]:
           UUID : ec120401:b6ed52e6:3814fac4:48fcf4fc
     RAID Level : 1
        Members : 2
          Slots : [UU]
    Failed disk : none
      This Slot : 1
     Array Size : 625137664 (298.09 GiB 320.07 GB)
   Per Dev Size : 625137928 (298.09 GiB 320.07 GB)
  Sector Offset : 0
    Num Stripes : 2441944
     Chunk Size : 64 KiB
       Reserved : 0
  Migrate State : idle
      Map State : normal
    Dirty State : dirty

  Disk00 Serial : 6QF4WDE3
          State : active
             Id : 00000000
    Usable Size : 625137928 (298.09 GiB 320.07 GB)

> Anyway, I'm suspicious that you've got either your SATA controller also with
> RAID enabled, or you're also using dmraid and it's conflicting with the md
> driver. Or you've misconfigured mdadm for imsm. The result is the secondary
> GPT is getting squished. I think you should read this document, as it
> proposes creating a container first, then RAID within that. If you're
> creating the RAID entirely from within Windows this may not be what it does.
>
> http://download.intel.com/design/intarch/PAPERS/326024.pdf

I am working on this and will read it all. I installed with YaST2, and I hope the problem is only mdadm and not everything together ;).

But I believe I read in the changelog of parted 3.1 that a RAID(1) GPT error bug was corrected?

I hope I can create a working mdadm 3.1 package that works with openSUSE 12.2 (I am not a programmer).

Thanks for the hint about Fedora; I had not found the gdisk tool before.

-- 
mit freundlichen Grüßen / best regards,

Günther J. Niederwimmer
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
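P.S. For reference, the container-first creation flow that the Intel paper describes can be sketched roughly as below. The device names /dev/sda and /dev/sdb and the volume name Volume0 are assumptions taken from the output above; these commands destroy existing data on the member disks, so this is only a sketch, not a recipe for this system:

```shell
# 1. Create an IMSM container spanning the two member disks
#    (the container itself holds no data, only the Intel metadata)
mdadm --create /dev/md/imsm0 --metadata=imsm --raid-devices=2 /dev/sda /dev/sdb

# 2. Create the RAID1 volume inside that container
mdadm --create /dev/md/Volume0 --level=1 --raid-devices=2 /dev/md/imsm0

# 3. Verify that the platform and the new container/volume look sane
mdadm --detail-platform
mdadm -D /dev/md/imsm0
mdadm -E /dev/sda
```

Creating the volume inside a container this way should keep the metadata layout consistent with what the option ROM / Windows driver expects, which is the point Chris's linked paper makes.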