RE: dmraid panic when rebuilding and reboot occurs

Hi,

 

This issue was seen on RH 5.4; the steps to duplicate it are below (a command sketch follows the list). A console picture and log file are attached.

 

1.      Boot into the RAID ROM (9.5.0.1037)

2.      Create a volume at RAID 5 level (1 TB x 4)

3.      Boot from the RHEL 5.4 DVD

4.      Full installation with all packages

5.      Boot into the OS after installation finishes

6.      Remove one of the RAID volume HDDs

7.      Insert a new empty HDD

8.      Run "dmraid --rebuild {raid volume name} {new hdd location}" to rebuild

9.      Run "dmraid -n >> log.txt" to capture the dmraid metadata log

10.  Reboot the system; it cannot boot into the OS (dmraid.jpg)

11.  Reboot (reset) the system and enter the RAID ROM (RAID volume status is Normal)
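For reference, a minimal shell sketch of steps 8 and 9, assuming an isw set named isw_xxx_Volume0 and the replacement disk at /dev/sdd (both placeholders, matching the follow-up below); adapt the names to your system:

#!/bin/bash
# Steps 8-9 above with placeholder names: start the rebuild of the
# degraded isw RAID 5 set onto the replacement disk, then append the
# native metadata dump of all member disks to the log.
SET=isw_xxx_Volume0   # set name as reported by "dmraid -s"
NEW=/dev/sdd          # device path of the replacement disk

dmraid --rebuild "$SET" "$NEW"   # step 8: kick off the rebuild
dmraid -n >> log.txt             # step 9: append native metadata dump
dmraid -s "$SET"                 # show the set status afterwards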

 

 

From: Fan, Haipao
Sent: Thursday, August 05, 2010 6:59 AM
To: 'ataraid-list@xxxxxxxxxx'
Subject: dmraid panic when rebuilding and reboot occurs

 

Hi,

 

I saw the system panic on reboot during a rebuild of a RAID 5 volume on RH 5.4 plus the rc13-63 patch. The steps I took are:

1.      Boot into the OROM

2.      Configure a bootable RAID 5 volume with a size of 100 GB from 4 HDDs

3.      Install RHEL 5.4 GA to this RAID 5 volume

4.      Boot the RH 5.4 OS from the RAID 5 volume

5.      Apply the dmraid-1.0.0.rc13-63.el5.i386.rpm patch with force

6.      Enter "reboot" to reboot the system

7.      Un-plug one HDD (/dev/sdd)

8.      Plug a new empty HDD into the same slot

9.      Enter "dmraid --rebuild isw_xxx_Volume0 /dev/sdd" to start the rebuild

10.  Enter "reboot" to reboot the system before the rebuild finishes

The system is then unable to boot into the OS due to a panic or hang in the start-up process (a sketch for checking the rebuild state before rebooting follows).
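As a side note, a minimal sketch, assuming the same placeholder set name, of how one might confirm the rebuild state before rebooting; the exact status strings vary by dmraid/device-mapper version:

#!/bin/bash
# Check the RAID set state before issuing "reboot". "dmraid -s"
# prints a per-set summary including a status field, and
# "dmsetup status" shows the live device-mapper target status.
SET=isw_xxx_Volume0   # placeholder set name from step 9 above

dmraid -s "$SET"      # set summary, including sync status
dmsetup status        # raw target status for all mapped devices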

 

Has anyone seen the same issue before? Or is there a newer patch that can fix this problem?

Attachment: DMRaid.JPG
Description: DMRaid.JPG

/dev/sda (isw):
0x000 sig: "  Intel Raid ISM Cfg Sig. 1.2.02"
0x020 check_sum: 3728822926
0x024 mpb_size: 584
0x028 family_num: 1008424060
0x02c generation_num: 7
0x030 error_log_size: 0
0x034 attributes: 2147483648
0x038 num_disks: 4
0x039 num_raid_devs: 1
0x03a error_log_pos: 0
0x03c cache_size: 0
0x040 orig_family_num: 1008424060
0x044 power_cycle_count: 0
0x048 bbm_log_size: 0
0x0d8 disk[0].serial: "        9QJ5B0JB"
0x0e8 disk[0].totalBlocks: 1953525168
0x0ec disk[0].scsiId: 0x0
0x0f0 disk[0].status: 0x53a
0x0f4 disk[0].owner_cfg_num: 0x0
0x108 disk[1].serial: "        9QJ588CP"
0x118 disk[1].totalBlocks: 1953525168
0x11c disk[1].scsiId: 0x10000
0x120 disk[1].status: 0x53a
0x124 disk[1].owner_cfg_num: 0x0
0x138 disk[2].serial: "        9WM073FV"
0x148 disk[2].totalBlocks: 3907029168
0x14c disk[2].scsiId: 0x20000
0x150 disk[2].status: 0x53a
0x154 disk[2].owner_cfg_num: 0x0
0x168 disk[3].serial: "        9QJ4X1HV"
0x178 disk[3].totalBlocks: 1953525168
0x17c disk[3].scsiId: 0x30000
0x180 disk[3].status: 0x53a
0x184 disk[3].owner_cfg_num: 0x0
0x198 isw_dev[0].volume: "         Volume0"
0x1ac isw_dev[0].SizeHigh: 0
0x1a8 isw_dev[0].SizeLow: 419430400
0x1b0 isw_dev[0].status: 0xc
0x1b4 isw_dev[0].reserved_blocks: 0
0x1b8 isw_dev[0].migr_priority: 0
0x1b9 isw_dev[0].num_sub_vol: 0
0x1ba isw_dev[0].tid: 0
0x1bb isw_dev[0].cng_master_disk: 0
0x1bc isw_dev[0].cache_policy: 0
0x1be isw_dev[0].cng_state: 0
0x1bf isw_dev[0].cng_sub_state: 0
0x1e8 isw_dev[0].vol.curr_migr_unit: 0
0x1ec isw_dev[0].vol.check_point_id: 0
0x1f0 isw_dev[0].vol.migr_state: 0
0x1f1 isw_dev[0].vol.migr_type: 0
0x1f2 isw_dev[0].vol.dirty: 0
0x1f3 isw_dev[0].vol.fs_state: 255
0x1f4 isw_dev[0].vol.verify_errors: 0
0x1f6 isw_dev[0].vol.verify_bad_blocks: 0
0x208 isw_dev[0].vol.map[0].pba_of_lba0: 0
0x20c isw_dev[0].vol.map[0].blocks_per_member: 139810568
0x210 isw_dev[0].vol.map[0].num_data_stripes: 1092268
0x214 isw_dev[0].vol.map[0].blocks_per_strip: 128
0x216 isw_dev[0].vol.map[0].map_state: 0
0x217 isw_dev[0].vol.map[0].raid_level: 5
0x218 isw_dev[0].vol.map[0].num_members: 4
0x219 isw_dev[0].vol.map[0].num_domains: 1
0x21a isw_dev[0].vol.map[0].failed_disk_num: 255
0x21b isw_dev[0].vol.map[0].ddf: 1
0x238 isw_dev[0].vol.map[0].disk_ord_tbl[0]: 0x0
0x23c isw_dev[0].vol.map[0].disk_ord_tbl[1]: 0x1
0x240 isw_dev[0].vol.map[0].disk_ord_tbl[2]: 0x2
0x244 isw_dev[0].vol.map[0].disk_ord_tbl[3]: 0x3

/dev/sdb (isw):
0x000 sig: "  Intel Raid ISM Cfg Sig. 1.2.02"
0x020 check_sum: 3728822926
0x024 mpb_size: 584
0x028 family_num: 1008424060
0x02c generation_num: 7
0x030 error_log_size: 0
0x034 attributes: 2147483648
0x038 num_disks: 4
0x039 num_raid_devs: 1
0x03a error_log_pos: 0
0x03c cache_size: 0
0x040 orig_family_num: 1008424060
0x044 power_cycle_count: 0
0x048 bbm_log_size: 0
0x0d8 disk[0].serial: "        9QJ5B0JB"
0x0e8 disk[0].totalBlocks: 1953525168
0x0ec disk[0].scsiId: 0x0
0x0f0 disk[0].status: 0x53a
0x0f4 disk[0].owner_cfg_num: 0x0
0x108 disk[1].serial: "        9QJ588CP"
0x118 disk[1].totalBlocks: 1953525168
0x11c disk[1].scsiId: 0x10000
0x120 disk[1].status: 0x53a
0x124 disk[1].owner_cfg_num: 0x0
0x138 disk[2].serial: "        9WM073FV"
0x148 disk[2].totalBlocks: 3907029168
0x14c disk[2].scsiId: 0x20000
0x150 disk[2].status: 0x53a
0x154 disk[2].owner_cfg_num: 0x0
0x168 disk[3].serial: "        9QJ4X1HV"
0x178 disk[3].totalBlocks: 1953525168
0x17c disk[3].scsiId: 0x30000
0x180 disk[3].status: 0x53a
0x184 disk[3].owner_cfg_num: 0x0
0x198 isw_dev[0].volume: "         Volume0"
0x1ac isw_dev[0].SizeHigh: 0
0x1a8 isw_dev[0].SizeLow: 419430400
0x1b0 isw_dev[0].status: 0xc
0x1b4 isw_dev[0].reserved_blocks: 0
0x1b8 isw_dev[0].migr_priority: 0
0x1b9 isw_dev[0].num_sub_vol: 0
0x1ba isw_dev[0].tid: 0
0x1bb isw_dev[0].cng_master_disk: 0
0x1bc isw_dev[0].cache_policy: 0
0x1be isw_dev[0].cng_state: 0
0x1bf isw_dev[0].cng_sub_state: 0
0x1e8 isw_dev[0].vol.curr_migr_unit: 0
0x1ec isw_dev[0].vol.check_point_id: 0
0x1f0 isw_dev[0].vol.migr_state: 0
0x1f1 isw_dev[0].vol.migr_type: 0
0x1f2 isw_dev[0].vol.dirty: 0
0x1f3 isw_dev[0].vol.fs_state: 255
0x1f4 isw_dev[0].vol.verify_errors: 0
0x1f6 isw_dev[0].vol.verify_bad_blocks: 0
0x208 isw_dev[0].vol.map[0].pba_of_lba0: 0
0x20c isw_dev[0].vol.map[0].blocks_per_member: 139810568
0x210 isw_dev[0].vol.map[0].num_data_stripes: 1092268
0x214 isw_dev[0].vol.map[0].blocks_per_strip: 128
0x216 isw_dev[0].vol.map[0].map_state: 0
0x217 isw_dev[0].vol.map[0].raid_level: 5
0x218 isw_dev[0].vol.map[0].num_members: 4
0x219 isw_dev[0].vol.map[0].num_domains: 1
0x21a isw_dev[0].vol.map[0].failed_disk_num: 255
0x21b isw_dev[0].vol.map[0].ddf: 1
0x238 isw_dev[0].vol.map[0].disk_ord_tbl[0]: 0x0
0x23c isw_dev[0].vol.map[0].disk_ord_tbl[1]: 0x1
0x240 isw_dev[0].vol.map[0].disk_ord_tbl[2]: 0x2
0x244 isw_dev[0].vol.map[0].disk_ord_tbl[3]: 0x3

/dev/sdd (isw):
0x000 sig: "  Intel Raid ISM Cfg Sig. 1.2.02"
0x020 check_sum: 3728822926
0x024 mpb_size: 584
0x028 family_num: 1008424060
0x02c generation_num: 7
0x030 error_log_size: 0
0x034 attributes: 2147483648
0x038 num_disks: 4
0x039 num_raid_devs: 1
0x03a error_log_pos: 0
0x03c cache_size: 0
0x040 orig_family_num: 1008424060
0x044 power_cycle_count: 0
0x048 bbm_log_size: 0
0x0d8 disk[0].serial: "        9QJ5B0JB"
0x0e8 disk[0].totalBlocks: 1953525168
0x0ec disk[0].scsiId: 0x0
0x0f0 disk[0].status: 0x53a
0x0f4 disk[0].owner_cfg_num: 0x0
0x108 disk[1].serial: "        9QJ588CP"
0x118 disk[1].totalBlocks: 1953525168
0x11c disk[1].scsiId: 0x10000
0x120 disk[1].status: 0x53a
0x124 disk[1].owner_cfg_num: 0x0
0x138 disk[2].serial: "        9WM073FV"
0x148 disk[2].totalBlocks: 3907029168
0x14c disk[2].scsiId: 0x20000
0x150 disk[2].status: 0x53a
0x154 disk[2].owner_cfg_num: 0x0
0x168 disk[3].serial: "        9QJ4X1HV"
0x178 disk[3].totalBlocks: 1953525168
0x17c disk[3].scsiId: 0x30000
0x180 disk[3].status: 0x53a
0x184 disk[3].owner_cfg_num: 0x0
0x198 isw_dev[0].volume: "         Volume0"
0x1ac isw_dev[0].SizeHigh: 0
0x1a8 isw_dev[0].SizeLow: 419430400
0x1b0 isw_dev[0].status: 0xc
0x1b4 isw_dev[0].reserved_blocks: 0
0x1b8 isw_dev[0].migr_priority: 0
0x1b9 isw_dev[0].num_sub_vol: 0
0x1ba isw_dev[0].tid: 0
0x1bb isw_dev[0].cng_master_disk: 0
0x1bc isw_dev[0].cache_policy: 0
0x1be isw_dev[0].cng_state: 0
0x1bf isw_dev[0].cng_sub_state: 0
0x1e8 isw_dev[0].vol.curr_migr_unit: 0
0x1ec isw_dev[0].vol.check_point_id: 0
0x1f0 isw_dev[0].vol.migr_state: 0
0x1f1 isw_dev[0].vol.migr_type: 0
0x1f2 isw_dev[0].vol.dirty: 0
0x1f3 isw_dev[0].vol.fs_state: 255
0x1f4 isw_dev[0].vol.verify_errors: 0
0x1f6 isw_dev[0].vol.verify_bad_blocks: 0
0x208 isw_dev[0].vol.map[0].pba_of_lba0: 0
0x20c isw_dev[0].vol.map[0].blocks_per_member: 139810568
0x210 isw_dev[0].vol.map[0].num_data_stripes: 1092268
0x214 isw_dev[0].vol.map[0].blocks_per_strip: 128
0x216 isw_dev[0].vol.map[0].map_state: 0
0x217 isw_dev[0].vol.map[0].raid_level: 5
0x218 isw_dev[0].vol.map[0].num_members: 4
0x219 isw_dev[0].vol.map[0].num_domains: 1
0x21a isw_dev[0].vol.map[0].failed_disk_num: 255
0x21b isw_dev[0].vol.map[0].ddf: 1
0x238 isw_dev[0].vol.map[0].disk_ord_tbl[0]: 0x0
0x23c isw_dev[0].vol.map[0].disk_ord_tbl[1]: 0x1
0x240 isw_dev[0].vol.map[0].disk_ord_tbl[2]: 0x2
0x244 isw_dev[0].vol.map[0].disk_ord_tbl[3]: 0x3

/dev/sde (isw):
0x000 sig: "  Intel Raid ISM Cfg Sig. 1.2.02"
0x020 check_sum: 3728822926
0x024 mpb_size: 584
0x028 family_num: 1008424060
0x02c generation_num: 7
0x030 error_log_size: 0
0x034 attributes: 2147483648
0x038 num_disks: 4
0x039 num_raid_devs: 1
0x03a error_log_pos: 0
0x03c cache_size: 0
0x040 orig_family_num: 1008424060
0x044 power_cycle_count: 0
0x048 bbm_log_size: 0
0x0d8 disk[0].serial: "        9QJ5B0JB"
0x0e8 disk[0].totalBlocks: 1953525168
0x0ec disk[0].scsiId: 0x0
0x0f0 disk[0].status: 0x53a
0x0f4 disk[0].owner_cfg_num: 0x0
0x108 disk[1].serial: "        9QJ588CP"
0x118 disk[1].totalBlocks: 1953525168
0x11c disk[1].scsiId: 0x10000
0x120 disk[1].status: 0x53a
0x124 disk[1].owner_cfg_num: 0x0
0x138 disk[2].serial: "        9WM073FV"
0x148 disk[2].totalBlocks: 3907029168
0x14c disk[2].scsiId: 0x20000
0x150 disk[2].status: 0x53a
0x154 disk[2].owner_cfg_num: 0x0
0x168 disk[3].serial: "        9QJ4X1HV"
0x178 disk[3].totalBlocks: 1953525168
0x17c disk[3].scsiId: 0x30000
0x180 disk[3].status: 0x53a
0x184 disk[3].owner_cfg_num: 0x0
0x198 isw_dev[0].volume: "         Volume0"
0x1ac isw_dev[0].SizeHigh: 0
0x1a8 isw_dev[0].SizeLow: 419430400
0x1b0 isw_dev[0].status: 0xc
0x1b4 isw_dev[0].reserved_blocks: 0
0x1b8 isw_dev[0].migr_priority: 0
0x1b9 isw_dev[0].num_sub_vol: 0
0x1ba isw_dev[0].tid: 0
0x1bb isw_dev[0].cng_master_disk: 0
0x1bc isw_dev[0].cache_policy: 0
0x1be isw_dev[0].cng_state: 0
0x1bf isw_dev[0].cng_sub_state: 0
0x1e8 isw_dev[0].vol.curr_migr_unit: 0
0x1ec isw_dev[0].vol.check_point_id: 0
0x1f0 isw_dev[0].vol.migr_state: 0
0x1f1 isw_dev[0].vol.migr_type: 0
0x1f2 isw_dev[0].vol.dirty: 0
0x1f3 isw_dev[0].vol.fs_state: 255
0x1f4 isw_dev[0].vol.verify_errors: 0
0x1f6 isw_dev[0].vol.verify_bad_blocks: 0
0x208 isw_dev[0].vol.map[0].pba_of_lba0: 0
0x20c isw_dev[0].vol.map[0].blocks_per_member: 139810568
0x210 isw_dev[0].vol.map[0].num_data_stripes: 1092268
0x214 isw_dev[0].vol.map[0].blocks_per_strip: 128
0x216 isw_dev[0].vol.map[0].map_state: 0
0x217 isw_dev[0].vol.map[0].raid_level: 5
0x218 isw_dev[0].vol.map[0].num_members: 4
0x219 isw_dev[0].vol.map[0].num_domains: 1
0x21a isw_dev[0].vol.map[0].failed_disk_num: 255
0x21b isw_dev[0].vol.map[0].ddf: 1
0x238 isw_dev[0].vol.map[0].disk_ord_tbl[0]: 0x0
0x23c isw_dev[0].vol.map[0].disk_ord_tbl[1]: 0x1
0x240 isw_dev[0].vol.map[0].disk_ord_tbl[2]: 0x2
0x244 isw_dev[0].vol.map[0].disk_ord_tbl[3]: 0x3
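The per-disk sections above appear identical across all four members (same check_sum, generation_num 7, failed_disk_num 255, migr_state 0). A small, hypothetical bash helper for checking that mechanically on a "dmraid -n" dump such as log.txt:

#!/bin/bash
# Hypothetical helper: split a "dmraid -n" dump into one file per
# member disk and diff each section against the first, to confirm
# the isw metadata on all members agrees.
csplit -z -f member- log.txt '/^\/dev\/sd/' '{*}'

for f in member-0*; do
    [ "$f" = member-00 ] && continue
    # drop the "/dev/sdX (isw):" header so identical bodies compare equal
    if diff <(tail -n +2 member-00) <(tail -n +2 "$f") >/dev/null; then
        echo "$f: metadata matches member-00"
    else
        echo "$f: metadata DIFFERS from member-00"
    fi
done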

_______________________________________________
Ataraid-list mailing list
Ataraid-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/ataraid-list
