RE: DMRAID+Intel P35 (ICH9R)


 



Glad to hear things are working for you now.

The "generation_num" is incremented each time the metadata is written.
In your scenario the BIOS option ROM (OROM) is updating the metadata,
so you may see the generation_num change on reboot.  Entering the OROM
utility [Ctrl-I] also increments the generation_num.
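For anyone curious where those numbers live, the field offsets shown in the `dmraid -n` dumps below can be read straight out of the raw metadata block (MPB). The sketch below is a minimal, unofficial parser built only from the offsets visible in the dump - it is not the authoritative ISW layout (see the .h files under lib/format/ in the dmraid source for that). The checksum routine assumes the common sum-of-32-bit-words scheme; that assumption is at least consistent with the dumps, where generation_num moving from 2 to 6 moved check_sum from 2523663917 to 2523663921, i.e. by exactly 4.

```python
import struct

# Offsets copied from the "dmraid -n" output; field layout is an
# assumption based on that dump, not an official specification.
SIG_OFF  = 0x000   # 32-byte signature string
CSUM_OFF = 0x020   # u32 check_sum
GEN_OFF  = 0x02c   # u32 generation_num

def parse_isw_header(mpb: bytes) -> dict:
    """Parse the leading fields of an ISW metadata block (MPB)."""
    check_sum, mpb_size, family_num, generation_num = \
        struct.unpack_from("<4I", mpb, CSUM_OFF)
    return {
        "sig": mpb[SIG_OFF:SIG_OFF + 32].decode("ascii", "replace"),
        "check_sum": check_sum,
        "mpb_size": mpb_size,
        "family_num": family_num,
        "generation_num": generation_num,
    }

def isw_checksum(mpb: bytes, mpb_size: int) -> int:
    """Assumed scheme: 32-bit sum of all words in the MPB, with the
    stored check_sum field itself excluded from the sum."""
    words = struct.unpack_from("<%dI" % (mpb_size // 4), mpb, 0)
    return (sum(words) - words[CSUM_OFF // 4]) & 0xFFFFFFFF

# Reconstruct a minimal MPB carrying the values from the first dump.
mpb = bytearray(480)
mpb[0:32] = b"  Intel Raid ISM Cfg Sig. 1.0.00"
struct.pack_into("<4I", mpb, CSUM_OFF, 2523663917, 480, 2634616174, 2)

hdr = parse_isw_header(bytes(mpb))
print(hdr["generation_num"], hdr["family_num"])  # -> 2 2634616174

# Bumping generation_num by 4 (2 -> 6) shifts the computed checksum by
# exactly 4, matching the check_sum delta between the two dumps below.
before = isw_checksum(mpb, 480)
struct.pack_into("<I", mpb, GEN_OFF, 6)
after = isw_checksum(mpb, 480)
print(after - before)  # -> 4
```

On a live system the MPB sits in the last sectors of each member disk; with dmraid installed, the metadata can be dumped to files and fed to a parser like this.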

Jason
 

>-----Original Message-----
>From: mFuSE@xxxxxxx [mailto:mFuSE@xxxxxxx] 
>Sent: Wednesday, July 18, 2007 6:35 PM
>To: Gaston, Jason D
>Cc: mauelshagen@xxxxxxxxxx; ATARAID (eg, Promise Fasttrak, 
>Highpoint 370) related discussions
>Subject: Re: DMRAID+Intel P35 (ICH9R)
>
>Hello,
>
>Actually, I mixed up the links; the right one should have been this:
>http://62.109.81.232/cgi-bin/sbb/sbb.cgi?&a=show&forum=1&show=3352
>
>Sorry for my late reply; I couldn't test it until now...
>
>
>I can't reproduce the bug anymore...
>I booted Ubuntu from the Live DVD - the RAID was still OK.
>Then I booted my installed Ubuntu without any dmraid - and after
>a reboot the RAID was gone.
>
>I created a new RAID array, booted into Ubuntu, and installed dmraid
>- and from then on everything was just fine...
>Even deleting the old RAID in the RAID BIOS and creating a new one
>with a different block size was no problem for dmraid's
>auto-detection (as it had been at the time of my initial message).
>
>
>So it's probably really a hardware issue, specifically with the
>Gigabyte P35 mainboards.
>Judging by the Gigabyte support forum, a lot of people seem to have
>problems with the Intel RAID on these mainboards, but nobody can say
>what's wrong...
>
>
>Anyway, here is the debug info from dmraid.
>One question: why does the "generation_num" of the RAID array differ
>after booting from the Live DVD?
>
>First boot after dmraid installation:
>root@mfuse-pc:/home/mfuse# dmraid -dvr
>INFO: RAID devices discovered:
>
>/dev/sdb: isw, "isw_cgdegbgbhe", GROUP, ok, 586072366 sectors, data@ 0
>/dev/sdc: isw, "isw_cgdegbgbhe", GROUP, ok, 586072366 sectors, data@ 0
>
>root@mfuse-pc:/home/mfuse# dmraid -dvs
>DEBUG: _find_set: searching isw_cgdegbgbhe
>DEBUG: _find_set: not found isw_cgdegbgbhe
>DEBUG: _find_set: searching isw_cgdegbgbhe_testraid
>DEBUG: _find_set: searching isw_cgdegbgbhe_testraid
>DEBUG: _find_set: not found isw_cgdegbgbhe_testraid
>DEBUG: _find_set: not found isw_cgdegbgbhe_testraid
>DEBUG: _find_set: searching isw_cgdegbgbhe
>DEBUG: _find_set: found isw_cgdegbgbhe
>DEBUG: _find_set: searching isw_cgdegbgbhe_testraid
>DEBUG: _find_set: searching isw_cgdegbgbhe_testraid
>DEBUG: _find_set: found isw_cgdegbgbhe_testraid
>DEBUG: _find_set: found isw_cgdegbgbhe_testraid
>DEBUG: checking isw device "/dev/sdb"
>DEBUG: checking isw device "/dev/sdc"
>DEBUG: set status of set "isw_cgdegbgbhe_testraid" to 16
>DEBUG: set status of set "isw_cgdegbgbhe" to 16
>*** Group superset isw_cgdegbgbhe
>--> Active Subset
>name   : isw_cgdegbgbhe_testraid
>size   : 1172134400
>stride : 32
>type   : stripe
>status : ok
>subsets: 0
>devs   : 2
>spares : 0
>DEBUG: freeing devices of RAID set "isw_cgdegbgbhe_testraid"
>DEBUG: freeing device "isw_cgdegbgbhe_testraid", path "/dev/sdb"
>DEBUG: freeing device "isw_cgdegbgbhe_testraid", path "/dev/sdc"
>DEBUG: freeing devices of RAID set "isw_cgdegbgbhe"
>DEBUG: freeing device "isw_cgdegbgbhe", path "/dev/sdb"
>DEBUG: freeing device "isw_cgdegbgbhe", path "/dev/sdc"
>
>
>root@mfuse-pc:/home/mfuse# dmraid -n
>INFO: RAID devices discovered:
>
>/dev/sdb (isw):
>0x000 sig: "  Intel Raid ISM Cfg Sig. 1.0.00"
>0x020 check_sum: 2523663917
>0x024 mpb_size: 480
>0x028 family_num: 2634616174
>0x02c generation_num: 2
>0x030 reserved[0]: 4080
>0x034 reserved[1]: 2147483648
>0x038 num_disks: 2
>0x039 num_raid_devs: 1
>0x03a fill[0]: 0
>0x03b fill[1]: 0
>0x0d8 disk[0].serial: "        5NF1D8TE"
>0x0e8 disk[0].totalBlocks: 586072368
>0x0ec disk[0].scsiId: 0x10000
>0x0f0 disk[0].status: 0x53a
>0x108 disk[1].serial: "        5NF1KHZS"
>0x118 disk[1].totalBlocks: 586072368
>0x11c disk[1].scsiId: 0x20000
>0x120 disk[1].status: 0x53a
>0x138 isw_dev[0].volume: "        testraid"
>0x14c isw_dev[0].SizeHigh: 0
>0x148 isw_dev[0].SizeLow: 1172133888
>0x150 isw_dev[0].status: 0xc
>0x154 isw_dev[0].reserved_blocks: 0
>0x158 isw_dev[0].filler[0]: 65536
>0x190 isw_dev[0].vol.migr_state: 0
>0x191 isw_dev[0].vol.migr_type: 0
>0x192 isw_dev[0].vol.dirty: 0
>0x193 isw_dev[0].vol.fill[0]: 255
>0x1a8 isw_dev[0].vol.map.pba_of_lba0: 0
>0x1ac isw_dev[0].vol.map.blocks_per_member: 586067208
>0x1b0 isw_dev[0].vol.map.num_data_stripes: 18314592
>0x1b4 isw_dev[0].vol.map.blocks_per_strip: 32
>0x1b6 isw_dev[0].vol.map.map_state: 0
>0x1b7 isw_dev[0].vol.map.raid_level: 0
>0x1b8 isw_dev[0].vol.map.num_members: 2
>0x1b9 isw_dev[0].vol.map.reserved[0]: 1
>0x1ba isw_dev[0].vol.map.reserved[1]: 255
>0x1bb isw_dev[0].vol.map.reserved[2]: 1
>0x1d8 isw_dev[0].vol.map.disk_ord_tbl[0]: 0x0
>0x1dc isw_dev[0].vol.map.disk_ord_tbl[1]: 0x1
>
>/dev/sdc (isw):
>0x000 sig: "  Intel Raid ISM Cfg Sig. 1.0.00"
>0x020 check_sum: 2523663917
>0x024 mpb_size: 480
>0x028 family_num: 2634616174
>0x02c generation_num: 2
>0x030 reserved[0]: 4080
>0x034 reserved[1]: 2147483648
>0x038 num_disks: 2
>0x039 num_raid_devs: 1
>0x03a fill[0]: 0
>0x03b fill[1]: 0
>0x0d8 disk[0].serial: "        5NF1D8TE"
>0x0e8 disk[0].totalBlocks: 586072368
>0x0ec disk[0].scsiId: 0x10000
>0x0f0 disk[0].status: 0x53a
>0x108 disk[1].serial: "        5NF1KHZS"
>0x118 disk[1].totalBlocks: 586072368
>0x11c disk[1].scsiId: 0x20000
>0x120 disk[1].status: 0x53a
>0x138 isw_dev[0].volume: "        testraid"
>0x14c isw_dev[0].SizeHigh: 0
>0x148 isw_dev[0].SizeLow: 1172133888
>0x150 isw_dev[0].status: 0xc
>0x154 isw_dev[0].reserved_blocks: 0
>0x158 isw_dev[0].filler[0]: 65536
>0x190 isw_dev[0].vol.migr_state: 0
>0x191 isw_dev[0].vol.migr_type: 0
>0x192 isw_dev[0].vol.dirty: 0
>0x193 isw_dev[0].vol.fill[0]: 255
>0x1a8 isw_dev[0].vol.map.pba_of_lba0: 0
>0x1ac isw_dev[0].vol.map.blocks_per_member: 586067208
>0x1b0 isw_dev[0].vol.map.num_data_stripes: 18314592
>0x1b4 isw_dev[0].vol.map.blocks_per_strip: 32
>0x1b6 isw_dev[0].vol.map.map_state: 0
>0x1b7 isw_dev[0].vol.map.raid_level: 0
>0x1b8 isw_dev[0].vol.map.num_members: 2
>0x1b9 isw_dev[0].vol.map.reserved[0]: 1
>0x1ba isw_dev[0].vol.map.reserved[1]: 255
>0x1bb isw_dev[0].vol.map.reserved[2]: 1
>0x1d8 isw_dev[0].vol.map.disk_ord_tbl[0]: 0x0
>0x1dc isw_dev[0].vol.map.disk_ord_tbl[1]: 0x1
>
>
>
>After booting from the Live CD and then booting back into the
>installed system:
>
>root@mfuse-pc:/home/mfuse# dmraid -dvr
>INFO: RAID devices discovered:
>
>/dev/sdb: isw, "isw_cgdegbgbhe", GROUP, ok, 586072366 sectors, data@ 0
>/dev/sdc: isw, "isw_cgdegbgbhe", GROUP, ok, 586072366 sectors, data@ 0
>
>root@mfuse-pc:/home/mfuse# dmraid -dvs
>DEBUG: _find_set: searching isw_cgdegbgbhe
>DEBUG: _find_set: not found isw_cgdegbgbhe
>DEBUG: _find_set: searching isw_cgdegbgbhe_testraid
>DEBUG: _find_set: searching isw_cgdegbgbhe_testraid
>DEBUG: _find_set: not found isw_cgdegbgbhe_testraid
>DEBUG: _find_set: not found isw_cgdegbgbhe_testraid
>DEBUG: _find_set: searching isw_cgdegbgbhe
>DEBUG: _find_set: found isw_cgdegbgbhe
>DEBUG: _find_set: searching isw_cgdegbgbhe_testraid
>DEBUG: _find_set: searching isw_cgdegbgbhe_testraid
>DEBUG: _find_set: found isw_cgdegbgbhe_testraid
>DEBUG: _find_set: found isw_cgdegbgbhe_testraid
>DEBUG: checking isw device "/dev/sdb"
>DEBUG: checking isw device "/dev/sdc"
>DEBUG: set status of set "isw_cgdegbgbhe_testraid" to 16
>DEBUG: set status of set "isw_cgdegbgbhe" to 16
>*** Group superset isw_cgdegbgbhe
>--> Active Subset
>name   : isw_cgdegbgbhe_testraid
>size   : 1172134400
>stride : 32
>type   : stripe
>status : ok
>subsets: 0
>devs   : 2
>spares : 0
>DEBUG: freeing devices of RAID set "isw_cgdegbgbhe_testraid"
>DEBUG: freeing device "isw_cgdegbgbhe_testraid", path "/dev/sdb"
>DEBUG: freeing device "isw_cgdegbgbhe_testraid", path "/dev/sdc"
>DEBUG: freeing devices of RAID set "isw_cgdegbgbhe"
>DEBUG: freeing device "isw_cgdegbgbhe", path "/dev/sdb"
>DEBUG: freeing device "isw_cgdegbgbhe", path "/dev/sdc"
>
>root@mfuse-pc:/home/mfuse# dmraid -n
>/dev/sdb (isw):
>0x000 sig: "  Intel Raid ISM Cfg Sig. 1.0.00"
>0x020 check_sum: 2523663921
>0x024 mpb_size: 480
>0x028 family_num: 2634616174
>0x02c generation_num: 6
>0x030 reserved[0]: 4080
>0x034 reserved[1]: 2147483648
>0x038 num_disks: 2
>0x039 num_raid_devs: 1
>0x03a fill[0]: 0
>0x03b fill[1]: 0
>0x0d8 disk[0].serial: "        5NF1D8TE"
>0x0e8 disk[0].totalBlocks: 586072368
>0x0ec disk[0].scsiId: 0x10000
>0x0f0 disk[0].status: 0x53a
>0x108 disk[1].serial: "        5NF1KHZS"
>0x118 disk[1].totalBlocks: 586072368
>0x11c disk[1].scsiId: 0x20000
>0x120 disk[1].status: 0x53a
>0x138 isw_dev[0].volume: "        testraid"
>0x14c isw_dev[0].SizeHigh: 0
>0x148 isw_dev[0].SizeLow: 1172133888
>0x150 isw_dev[0].status: 0xc
>0x154 isw_dev[0].reserved_blocks: 0
>0x158 isw_dev[0].filler[0]: 65536
>0x190 isw_dev[0].vol.migr_state: 0
>0x191 isw_dev[0].vol.migr_type: 0
>0x192 isw_dev[0].vol.dirty: 0
>0x193 isw_dev[0].vol.fill[0]: 255
>0x1a8 isw_dev[0].vol.map.pba_of_lba0: 0
>0x1ac isw_dev[0].vol.map.blocks_per_member: 586067208
>0x1b0 isw_dev[0].vol.map.num_data_stripes: 18314592
>0x1b4 isw_dev[0].vol.map.blocks_per_strip: 32
>0x1b6 isw_dev[0].vol.map.map_state: 0
>0x1b7 isw_dev[0].vol.map.raid_level: 0
>0x1b8 isw_dev[0].vol.map.num_members: 2
>0x1b9 isw_dev[0].vol.map.reserved[0]: 1
>0x1ba isw_dev[0].vol.map.reserved[1]: 255
>0x1bb isw_dev[0].vol.map.reserved[2]: 1
>0x1d8 isw_dev[0].vol.map.disk_ord_tbl[0]: 0x0
>0x1dc isw_dev[0].vol.map.disk_ord_tbl[1]: 0x1
>
>/dev/sdc (isw):
>0x000 sig: "  Intel Raid ISM Cfg Sig. 1.0.00"
>0x020 check_sum: 2523663921
>0x024 mpb_size: 480
>0x028 family_num: 2634616174
>0x02c generation_num: 6
>0x030 reserved[0]: 4080
>0x034 reserved[1]: 2147483648
>0x038 num_disks: 2
>0x039 num_raid_devs: 1
>0x03a fill[0]: 0
>0x03b fill[1]: 0
>0x0d8 disk[0].serial: "        5NF1D8TE"
>0x0e8 disk[0].totalBlocks: 586072368
>0x0ec disk[0].scsiId: 0x10000
>0x0f0 disk[0].status: 0x53a
>0x108 disk[1].serial: "        5NF1KHZS"
>0x118 disk[1].totalBlocks: 586072368
>0x11c disk[1].scsiId: 0x20000
>0x120 disk[1].status: 0x53a
>0x138 isw_dev[0].volume: "        testraid"
>0x14c isw_dev[0].SizeHigh: 0
>0x148 isw_dev[0].SizeLow: 1172133888
>0x150 isw_dev[0].status: 0xc
>0x154 isw_dev[0].reserved_blocks: 0
>0x158 isw_dev[0].filler[0]: 65536
>0x190 isw_dev[0].vol.migr_state: 0
>0x191 isw_dev[0].vol.migr_type: 0
>0x192 isw_dev[0].vol.dirty: 0
>0x193 isw_dev[0].vol.fill[0]: 255
>0x1a8 isw_dev[0].vol.map.pba_of_lba0: 0
>0x1ac isw_dev[0].vol.map.blocks_per_member: 586067208
>0x1b0 isw_dev[0].vol.map.num_data_stripes: 18314592
>0x1b4 isw_dev[0].vol.map.blocks_per_strip: 32
>0x1b6 isw_dev[0].vol.map.map_state: 0
>0x1b7 isw_dev[0].vol.map.raid_level: 0
>0x1b8 isw_dev[0].vol.map.num_members: 2
>0x1b9 isw_dev[0].vol.map.reserved[0]: 1
>0x1ba isw_dev[0].vol.map.reserved[1]: 255
>0x1bb isw_dev[0].vol.map.reserved[2]: 1
>0x1d8 isw_dev[0].vol.map.disk_ord_tbl[0]: 0x0
>0x1dc isw_dev[0].vol.map.disk_ord_tbl[1]: 0x1
>
>
>
>Regards,
>Piotr Brostovski
>
>Gaston, Jason D wrote:
>>> -----Original Message-----
>>> From: ataraid-list-bounces@xxxxxxxxxx
>>> [mailto:ataraid-list-bounces@xxxxxxxxxx] On Behalf Of Heinz Mauelshagen
>>> Sent: Saturday, July 14, 2007 2:13 AM
>>> To: Piotr Brostovski, Levigo Systems
>>> Cc: ataraid-list@xxxxxxxxxx; Mauelshagen@xxxxxxxxxx
>>> Subject: Re: DMRAID+Intel P35 (ICH9R)
>>>
>>> On Fri, Jul 13, 2007 at 03:11:09PM +0200, Piotr Brostovski, 
>>> Levigo Systems wrote:
>>>> Hello,
>>>>
>>>> I'm stuck with a problem related to the combination of the new
>>>> Intel P35 chipset on the Gigabyte P35C-DS3R and Linux (Ubuntu 7.04
>>>> Feisty Fawn).
>>>> I'm trying to find out how - and where - DMRAID stores its RAID
>>>> information.
>>> See the description in dmraid(8).
>>> Various metadata formats are supported (dmraid -l).
>>> Such vendor-specific metadata typically sits at the end of each
>>> component device in a RAID set.
>>> If you are interested in details, look at the .h files below
>>> lib/format/ in the dmraid source tree.
>>>
>>>> The DMRAID man page sadly doesn't say anything about how DMRAID
>>>> works, and Google finds a lot - but not what I'm exactly looking
>>>> for...
>>>> So I'm trying it with this e-mail ;)
>>>>
>>>> The problem is, with the Intel P35 system and Linux my RAID array
>>>> gets broken. And I don't know if DMRAID writes anything to the
>>>> disks?
>>> Not yet.
>>> Enhancements to do so, in order to e.g. replace a broken mirror
>>> member, are in the works.
>>>
>>>> All partitions are mounted read-only - or not mounted at all -
>>>> and still the RAID gets corrupted.
>>>>
>>>> With the nForce chipsets for AMD CPUs dmraid always worked
>>>> perfectly (except for a bug with an old dmraid version; after
>>>> compiling dmraid 1.0-rc13 everything ran just fine), but with the
>>>> Intel chipset I can't even say what exactly the problem is...
>>> Might be a flaw in the isw metadata format handler.
>>> Send your metadata to me for investigation please.
>>>
>>>> It could possibly be a hardware fault, but I'm not sure about
>>>> that, because with Windows everything seems to work fine.
>>> Well, sounds like a bug in the isw code...
>>>
>>>> Discussions in the Gigabyte support forum aren't getting
>>>> anywhere, and in the Ubuntu support forum I haven't even gotten an
>>>> answer to this problem :(
>>>>
>>>>
>>>> http://62.109.81.232/cgi-bin/sbb/sbb.cgi?&a=show&forum=1&show=3352&start=
>>>> http://forum.ubuntuusers.de/viewtopic.php?p=814604#814604
>>>>
>>>>
>>>> Maybe you have heard about problems with DMRAID and the Intel
>>>> P35, possibly in particular on the Gigabyte P35 mainboards?
>>>>
>>>>
>>>> Yours faithfully,
>>>> Piotr Brostovski
>>> -- 
>>>
>>> Regards,
>>> Heinz    -- The LVM Guy --
>>>
>>> *** Software bugs are stupid.
>>>    Nevertheless it needs not so stupid people to solve them ***
>>>
>>> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
>>> -=-=-=-=-=-=-=-
>>>
>>> Heinz Mauelshagen                                 Red Hat GmbH
>>> Consulting Development Engineer                   Am Sonnenhang 11
>>> Storage Development                               56242 Marienrachdorf
>>>                                                   Germany
>>> Mauelshagen@xxxxxxxxxx                            PHONE +49 171 7803392
>>>                                                   FAX   +49 2626 924446
>>> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
>>> -=-=-=-=-=-=-=-
>>>
>>> _______________________________________________
>>> Ataraid-list mailing list
>>> Ataraid-list@xxxxxxxxxx
>>> https://www.redhat.com/mailman/listinfo/ataraid-list
>>>
>> 
>> What is the actual bug being seen?  I am having trouble understanding
>> the forum messages.  Also, the second link points to a discussion
>> about Firefox, not RAID.  Is this issue being seen in both Windows
>> and Linux?
>> This motherboard has two SATA controllers on it, one Marvell and one
>> ICH9; could this be confusing things?
>> 
>> Thanks,
>> 
>> Jason
>> 
>

_______________________________________________
Ataraid-list mailing list
Ataraid-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/ataraid-list
