[root@localhost guest]# dmraid -b
/dev/sda: 312579695 total, "PVF904Z21DEXXN"
/dev/sdb: 312581808 total, "PVF904Z21HYVDN"
/dev/sdc: 312581808 total, "3JS39NKF"

The third disk is not a RAID member, but I think it is being checked for the
RAID1 rebuild. I don't understand why the sector counts of the first two disks
are different; they are the same model. If the problem is not only in the
metadata, do I have to install Windows on the third disk to rebuild, or is
there a boot CD/USB solution? One should exist... This is the first time I
have used RAID and I didn't expect failures this often. Is there a way to
prevent them?
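In case a live CD/USB really is the way to go, here is a minimal sketch of what
I would try instead of reinstalling Windows. It assumes my dmraid build is
recent enough to offer the -R/--rebuild operation (I would check "dmraid -h"
first), that the set-name placeholder below is replaced with the real name
printed by "dmraid -s", and that /dev/sdc is the disk to rebuild onto:

  # read-only: list the raw member disks and the state of each RAID set
  dmraid -r
  dmraid -s

  # activate the sets through device-mapper without changing the metadata
  dmraid -ay

  # only if --rebuild is supported: rebuild the degraded RAID1 set onto the spare
  dmraid -R isw_..._RAID1 /dev/sdc

Is that roughly the right procedure, or does the rebuild have to be started
from the Windows Matrix Storage Manager?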
[root@localhost guest]# dmraid -n
/dev/sda (isw):
0x000 sig: " Intel Raid ISM Cfg Sig. 1.2.00"
0x020 check_sum: 3197955174
0x024 mpb_size: 808
0x028 family_num: 4250497878
0x02c generation_num: 943
0x030 reserved[0]: 4080
0x034 reserved[1]: 2147483648
0x038 num_disks: 3
0x039 num_raid_devs: 2
0x03a fill[0]: 2
0x03b fill[1]: 0
0x040 filler[1]: 1125244694
0x0d8 disk[0].serial: " PVF904Z21DEXXN"
0x0e8 disk[0].totalBlocks: 312579695
0x0ec disk[0].scsiId: 0x20000
0x0f0 disk[0].status: 0x53a
0x108 disk[1].serial: " 3JS39NKF"
0x118 disk[1].totalBlocks: 312581808
0x11c disk[1].scsiId: 0x50000
0x120 disk[1].status: 0x53a
0x138 disk[2].serial: "PVF904Z21HYVDN:0"
0x148 disk[2].totalBlocks: 312581632
0x14c disk[2].scsiId: 0xffffffff
0x150 disk[2].status: 0x2
0x168 isw_dev[0].volume: " RAID0"
0x17c isw_dev[0].SizeHigh: 0
0x178 isw_dev[0].SizeLow: 167772160
0x180 isw_dev[0].status: 0xc
0x184 isw_dev[0].reserved_blocks: 0
0x1c0 isw_dev[0].vol.migr_state: 1
0x1c1 isw_dev[0].vol.migr_type: 1
0x1c2 isw_dev[0].vol.dirty: 0
0x1c3 isw_dev[0].vol.fill[0]: 0
0x1d8 isw_dev[0].vol.map.pba_of_lba0: 0
0x1dc isw_dev[0].vol.map.blocks_per_member: 83886344
0x1e0 isw_dev[0].vol.map.num_data_stripes: 327680
0x1e4 isw_dev[0].vol.map.blocks_per_strip: 256
0x1e6 isw_dev[0].vol.map.map_state: 0
0x1e7 isw_dev[0].vol.map.raid_level: 0
0x1e8 isw_dev[0].vol.map.num_members: 2
0x1e9 isw_dev[0].vol.map.reserved[0]: 1
0x1ea isw_dev[0].vol.map.reserved[1]: 1
0x1eb isw_dev[0].vol.map.reserved[2]: 1
0x208 isw_dev[0].vol.map.disk_ord_tbl[0]: 0x0
0x20c isw_dev[0].vol.map.disk_ord_tbl[1]: 0x1
0x210 isw_dev[0].vol.map.disk_ord_tbl[2]: 0x0
0x248 isw_dev[1].volume: " RAID1"
0x25c isw_dev[1].SizeHigh: 0
0x258 isw_dev[1].SizeLow: 228683776
0x260 isw_dev[1].status: 0xc
0x264 isw_dev[1].reserved_blocks: 0
0x2a0 isw_dev[1].vol.migr_state: 1
0x2a1 isw_dev[1].vol.migr_type: 1
0x2a2 isw_dev[1].vol.dirty: 0
0x2a3 isw_dev[1].vol.fill[0]: 0
0x2b8 isw_dev[1].vol.map.pba_of_lba0: 83890440
0x2bc isw_dev[1].vol.map.blocks_per_member: 228684040
0x2c0 isw_dev[1].vol.map.num_data_stripes: 893296
0x2c4 isw_dev[1].vol.map.blocks_per_strip: 128
0x2c6 isw_dev[1].vol.map.map_state: 0
0x2c7 isw_dev[1].vol.map.raid_level: 1
0x2c8 isw_dev[1].vol.map.num_members: 2
0x2c9 isw_dev[1].vol.map.reserved[0]: 2
0x2ca isw_dev[1].vol.map.reserved[1]: 1
0x2cb isw_dev[1].vol.map.reserved[2]: 1
0x2e8 isw_dev[1].vol.map.disk_ord_tbl[0]: 0x0
0x2ec isw_dev[1].vol.map.disk_ord_tbl[1]: 0x1
0x2f0 isw_dev[1].vol.map.disk_ord_tbl[2]: 0x5001108
/dev/sdb (isw):
0x000 sig: " Intel Raid ISM Cfg Sig. 1.2.00"
0x020 check_sum: 1183764248
0x024 mpb_size: 752
0x028 family_num: 2765561750
0x02c generation_num: 940
0x030 reserved[0]: 4080
0x034 reserved[1]: 2147483648
0x038 num_disks: 3
0x039 num_raid_devs: 2
0x03a fill[0]: 2
0x03b fill[1]: 0
0x040 filler[1]: 1125244694
0x0d8 disk[0].serial: "PVF904Z21DEXXN:1"
0x0e8 disk[0].totalBlocks: 312581632
0x0ec disk[0].scsiId: 0xffffffff
0x0f0 disk[0].status: 0x2
0x108 disk[1].serial: " PVF904Z21HYVDN"
0x118 disk[1].totalBlocks: 312581808
0x11c disk[1].scsiId: 0x30000
0x120 disk[1].status: 0x53a
0x138 disk[2].serial: "PVF904Z21DEXXN:0"
0x148 disk[2].totalBlocks: 312579584
0x14c disk[2].scsiId: 0xffffffff
0x150 disk[2].status: 0x2
0x168 isw_dev[0].volume: " RAID0:1"
0x17c isw_dev[0].SizeHigh: 0
0x178 isw_dev[0].SizeLow: 167772160
0x180 isw_dev[0].status: 0xc
0x184 isw_dev[0].reserved_blocks: 0
0x1c0 isw_dev[0].vol.migr_state: 1
0x1c1 isw_dev[0].vol.migr_type: 1
0x1c2 isw_dev[0].vol.dirty: 0
0x1c3 isw_dev[0].vol.fill[0]: 0
0x1d8 isw_dev[0].vol.map.pba_of_lba0: 0
0x1dc isw_dev[0].vol.map.blocks_per_member: 83886344
0x1e0 isw_dev[0].vol.map.num_data_stripes: 327680
0x1e4 isw_dev[0].vol.map.blocks_per_strip: 256
0x1e6 isw_dev[0].vol.map.map_state: 3
0x1e7 isw_dev[0].vol.map.raid_level: 0
0x1e8 isw_dev[0].vol.map.num_members: 2
0x1e9 isw_dev[0].vol.map.reserved[0]: 1
0x1eb isw_dev[0].vol.map.reserved[2]: 1
0x208 isw_dev[0].vol.map.disk_ord_tbl[0]: 0x1000000
0x20c isw_dev[0].vol.map.disk_ord_tbl[1]: 0x1
0x210 isw_dev[0].vol.map.disk_ord_tbl[2]: 0x0
0x248 isw_dev[1].volume: " RAID1:1"
0x25c isw_dev[1].SizeHigh: 0
0x258 isw_dev[1].SizeLow: 228683776
0x260 isw_dev[1].status: 0xc
0x264 isw_dev[1].reserved_blocks: 0
0x2a0 isw_dev[1].vol.migr_state: 0
0x2a1 isw_dev[1].vol.migr_type: 1
0x2a2 isw_dev[1].vol.dirty: 0
0x2a3 isw_dev[1].vol.fill[0]: 0
0x2b8 isw_dev[1].vol.map.pba_of_lba0: 83890440
0x2bc isw_dev[1].vol.map.blocks_per_member: 228684040
0x2c0 isw_dev[1].vol.map.num_data_stripes: 893296
0x2c4 isw_dev[1].vol.map.blocks_per_strip: 128
0x2c6 isw_dev[1].vol.map.map_state: 2
0x2c7 isw_dev[1].vol.map.raid_level: 1
0x2c8 isw_dev[1].vol.map.num_members: 2
0x2c9 isw_dev[1].vol.map.reserved[0]: 2
0x2cb isw_dev[1].vol.map.reserved[2]: 1
0x2e8 isw_dev[1].vol.map.disk_ord_tbl[0]: 0x1000000
0x2ec isw_dev[1].vol.map.disk_ord_tbl[1]: 0x1
0x2f0 isw_dev[1].vol.map.disk_ord_tbl[2]: 0x5001108
/dev/sdc (isw):
0x000 sig: " Intel Raid ISM Cfg Sig. 1.2.00"
0x020 check_sum: 3197955174
0x024 mpb_size: 808
0x028 family_num: 4250497878
0x02c generation_num: 943
0x030 reserved[0]: 4080
0x034 reserved[1]: 2147483648
0x038 num_disks: 3
0x039 num_raid_devs: 2
0x03a fill[0]: 2
0x03b fill[1]: 0
0x040 filler[1]: 1125244694
0x0d8 disk[0].serial: " PVF904Z21DEXXN"
0x0e8 disk[0].totalBlocks: 312579695
0x0ec disk[0].scsiId: 0x20000
0x0f0 disk[0].status: 0x53a
0x108 disk[1].serial: " 3JS39NKF"
0x118 disk[1].totalBlocks: 312581808
0x11c disk[1].scsiId: 0x50000
0x120 disk[1].status: 0x53a
0x138 disk[2].serial: "PVF904Z21HYVDN:0"
0x148 disk[2].totalBlocks: 312581632
0x14c disk[2].scsiId: 0xffffffff
0x150 disk[2].status: 0x2
0x168 isw_dev[0].volume: " RAID0"
0x17c isw_dev[0].SizeHigh: 0
0x178 isw_dev[0].SizeLow: 167772160
0x180 isw_dev[0].status: 0xc
0x184 isw_dev[0].reserved_blocks: 0
0x1c0 isw_dev[0].vol.migr_state: 1
0x1c1 isw_dev[0].vol.migr_type: 1
0x1c2 isw_dev[0].vol.dirty: 0
0x1c3 isw_dev[0].vol.fill[0]: 0
0x1d8 isw_dev[0].vol.map.pba_of_lba0: 0
0x1dc isw_dev[0].vol.map.blocks_per_member: 83886344
0x1e0 isw_dev[0].vol.map.num_data_stripes: 327680
0x1e4 isw_dev[0].vol.map.blocks_per_strip: 256
0x1e6 isw_dev[0].vol.map.map_state: 0
0x1e7 isw_dev[0].vol.map.raid_level: 0
0x1e8 isw_dev[0].vol.map.num_members: 2
0x1e9 isw_dev[0].vol.map.reserved[0]: 1
0x1ea isw_dev[0].vol.map.reserved[1]: 1
0x1eb isw_dev[0].vol.map.reserved[2]: 1
0x208 isw_dev[0].vol.map.disk_ord_tbl[0]: 0x0
0x20c isw_dev[0].vol.map.disk_ord_tbl[1]: 0x1
0x210 isw_dev[0].vol.map.disk_ord_tbl[2]: 0x0
0x248 isw_dev[1].volume: " RAID1"
0x25c isw_dev[1].SizeHigh: 0
0x258 isw_dev[1].SizeLow: 228683776
0x260 isw_dev[1].status: 0xc
0x264 isw_dev[1].reserved_blocks: 0
0x2a0 isw_dev[1].vol.migr_state: 1
0x2a1 isw_dev[1].vol.migr_type: 1
0x2a2 isw_dev[1].vol.dirty: 0
0x2a3 isw_dev[1].vol.fill[0]: 0
0x2b8 isw_dev[1].vol.map.pba_of_lba0: 83890440
0x2bc isw_dev[1].vol.map.blocks_per_member: 228684040
0x2c0 isw_dev[1].vol.map.num_data_stripes: 893296
0x2c4 isw_dev[1].vol.map.blocks_per_strip: 128
0x2c6 isw_dev[1].vol.map.map_state: 0
0x2c7 isw_dev[1].vol.map.raid_level: 1
0x2c8 isw_dev[1].vol.map.num_members: 2
0x2c9 isw_dev[1].vol.map.reserved[0]: 2
0x2ca isw_dev[1].vol.map.reserved[1]: 1
0x2cb isw_dev[1].vol.map.reserved[2]: 1
0x2e8 isw_dev[1].vol.map.disk_ord_tbl[0]: 0x0
0x2ec isw_dev[1].vol.map.disk_ord_tbl[1]: 0x1
0x2f0 isw_dev[1].vol.map.disk_ord_tbl[2]: 0x5001108
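Before anything is repaired, I would like to keep a copy of the current
metadata so it can always be put back. A minimal sketch of what I plan to run,
assuming "dmraid -rD" still dumps the raw metadata of each member disk into a
dmraid.isw/ directory as the man page describes, and, for the fallback, that
the ISW metadata really sits in the last sectors of each disk:

  # dump the on-disk metadata of every discovered member into files
  dmraid -rD

  # fallback: save the last 1024 sectors of each disk by hand
  for d in sda sdb sdc; do
      sz=$(blockdev --getsz /dev/$d)       # size in 512-byte sectors
      dd if=/dev/$d of=$d-tail.bin bs=512 skip=$((sz - 1024)) count=1024
  done

If you need the dumps themselves, tell me and I will send them off-list.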
On 9/27/07, Fang, Ying <ying.fang@xxxxxxxxx> wrote:
> Hi Tiago,
>
> I'd like to investigate why the OROM misinterpreted the ISW metadata written
> by iMSM. From the info below, the OROM failed to detect one hard drive in
> the array.
>
> Could you run "dmraid -b" and "dmraid -n" on your system? The first one
> shows the current hard drive info and the latter displays the metadata
> on the disks. Those may give me a clue about which hard drive caused the
> hiccup.
>
> I'll try to repair your metadata if possible. Please wait a short while.
>
> Thanks,
> Ying
>
> >------------------------------
> >
> >Message: 3
> >Date: Wed, 26 Sep 2007 19:34:30 +0100
> >From: "Tiago Freitas" <tiago.frt@xxxxxxxxx>
> >Subject: Re: isw device for volume broken after opensuse livecd boot
> >To: "ATARAID (eg, Promise Fasttrak, Highpoint 370) related
> >        discussions" <ataraid-list@xxxxxxxxxx>
> >Message-ID:
> >        <79af7a390709261134y4e4f6bb0w5ad080beed4b6f25@xxxxxxxxxxxxxx>
> >Content-Type: text/plain; charset=ISO-8859-1
> >
> >Ok, now it's even stranger. I changed the SATA cables to other SATA
> >ports and now the Matrix Storage Manager says I have 4 volumes:
> >
> >0 RAID0:1  80Gb     Failed
> >1 RAID1:1  109.0Gb  Degraded
> >2 RAID0    80Gb     Failed
> >3 RAID1    109.0Gb  Degraded
> >
> >Port
> >0 Hitachi 149.1GB Member Disk(0,1)
> >1 Hitachi 149.1GB Member Disk(2,3)
> >
> >While researching I found that this "offline member" error happens a lot
> >to Intel Matrix RAID users, some just a few hours after setup, because of
> >a bad shutdown. What does this mean? Is a UPS needed for Matrix RAID?
> >
> >If someone can use the metadata to investigate this, please ask me for
> >it before I redo the arrays.
> >
> _______________________________________________
> Ataraid-list mailing list
> Ataraid-list@xxxxxxxxxx
> https://www.redhat.com/mailman/listinfo/ataraid-list

_______________________________________________
Ataraid-list mailing list
Ataraid-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/ataraid-list