Oops with Sparc/RAID1/ext3

I've just been setting up some md devices on my Ultra1 and I got an oops
with kernel 2.4.18.

Here's what I did:

I created /dev/md4 using the following raidtab:

# /

raiddev 	/dev/md0
raid-level	1
nr-raid-disks	2
nr-spare-disks	0
chunk-size	64

device		/dev/sdb1
raid-disk	0

device		/dev/sda1
failed-disk	1

# /usr

raiddev		/dev/md1
raid-level	1
nr-raid-disks	2
nr-spare-disks	0
chunk-size	64

device		/dev/sdb2
raid-disk	0

device		/dev/sda2
failed-disk	1

# /var

raiddev 	/dev/md2
raid-level	1
nr-raid-disks	2
nr-spare-disks	0
chunk-size 	64

device		/dev/sdb4
raid-disk	0

device 		/dev/sda4
failed-disk	1

# SWAP

raiddev		/dev/md3
raid-level	1
nr-raid-disks	2
nr-spare-disks	0
chunk-size	64

device		/dev/sdb5
raid-disk	0

device		/dev/sda5
failed-disk	1

# /export

raiddev		/dev/md4
raid-level	1
nr-raid-disks	2
nr-spare-disks	0
chunk-size	64

device		/dev/sdb7
raid-disk	0
device		/dev/sda7
failed-disk	1

then:

lemur:~# cat /proc/mdstat 
Personalities : [raid1] 
read_ahead 1024 sectors
md4 : active raid1 scsi/host0/bus0/target1/lun0/part7[0]
      29063232 blocks [2/1] [U_]

md0 : active raid1 scsi/host0/bus0/target1/lun0/part4[1] scsi/host0/bus0/target1/lun0/part2[0]
      1444416 blocks [2/2] [UU]

unused devices: <none>
lemur:~# mdadm -D /dev/md4
/dev/md4:
        Version : 00.90.00
  Creation Time : Fri Apr  5 14:24:00 2002
     Raid Level : raid1
     Array Size : -296896
    Device Size : 29063232 (27.71 GiB 29.76 GB)
     Raid Disks : 2
    Total Disks : 2
Preferred Minor : 4
    Persistance : Superblock is persistant
    Update Time : Fri Apr  5 14:30:23 2002
          State : dirty, no-errors
  Active Drives : 1
 Working Drives : 1
  Failed Drives : 1
   Spare Drives : 0


  Number   Major   Minor   RaidDisk   State
     0       8       23        0      active sync /dev/scsi/host0/bus0/target1/lun0/part7
     1       0        0        1      faulty
        UUID :  6795899a:53d8f727:13dfb6ea:29c6bd7e
lemur:~# mke2fs -j /dev/md4
mke2fs 1.27 (8-Mar-2002)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
3637248 inodes, 7265808 blocks
363290 blocks (5.00%) reserved for the super user
First data block=0
222 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
        2654208, 4096000

Writing inode tables: done                            
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 33 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to
override.
lemur:~# mount /dev/md4 /mnt/tmp
lemur:~# cd /mnt/tmp
lemur:/mnt/tmp# ls
Killed
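(A quick sanity check on the numbers above, my own arithmetic rather than anything from the session: mke2fs made 7265808 blocks of 4 KiB, which is exactly the 29063232 1 KiB blocks mdadm -D reported as the Device Size, so the filesystem geometry itself looks consistent despite the bogus negative Array Size:)

```shell
# Sanity check: mke2fs block count (4 KiB blocks) times 4 should equal
# the md device size, which mdadm -D reports in 1 KiB blocks.
echo $((7265808 * 4))    # matches "Device Size : 29063232"
```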

This produces the following oops:

Apr  5 14:37:44 lemur kernel: data_access_exception:
SFSR[0000000000801009] SFAR[fffff802be82977c], going.
Apr  5 14:37:44 lemur kernel:               \|/ ____ \|/
Apr  5 14:37:44 lemur kernel:               "@'/ .. \`@"
Apr  5 14:37:44 lemur kernel:               /_| \__/ |_\
Apr  5 14:37:44 lemur kernel:                  \__U_/
Apr  5 14:37:44 lemur kernel: ls(743): Dax
Apr  5 14:37:44 lemur kernel: CPU[0]: local_irq_count[0] irqs_running[0]
Apr  5 14:37:44 lemur kernel: TSTATE: 0000004411009600 TPC:
0000000002010dbc TNPC: 0000000002010dc0 Y: 07000000    Not tainted
Apr  5 14:37:44 lemur kernel: g0: 0000000000012000 g1: fffff800137f0818
g2: 0000000000000001 g3: ffffffffffffffd0
Apr  5 14:37:44 lemur kernel: g4: fffff80000000000 g5: fffee002ab038f80
g6: fffff800019fc000 g7: 0000000000000000
Apr  5 14:37:44 lemur kernel: o0: 0000000000001028 o1: fffff800137f081c
o2: fffff8001386b4c0 o3: fffff80012f8fce0
Apr  5 14:37:44 lemur kernel: o4: 0000000000000008 o5: 00000000006b1518
sp: fffff800019fee61 ret_pc: fffff800137f0820
Apr  5 14:37:44 lemur kernel: l0: fffff800137f0800 l1: fffff800137f0c48
l2: 00000000005e8400 l3: fffff800137f0c18
Apr  5 14:37:44 lemur kernel: l4: 0000000000000002 l5: fffff800007fcc30
l6: 0000000000000094 l7: fffff80013b412a0
Apr  5 14:37:44 lemur kernel: i0: 0000000000000030 i1: fffee002ab038f5c
i2: 0000000000000000 i3: 0000000000000000
Apr  5 14:37:44 lemur kernel: i4: 0000000000000002 i5: fffff800137f0800
i6: fffff800019fef21 i7: 0000000002010fd0
Apr  5 14:37:44 lemur kernel: Caller[0000000002010fd0]
Apr  5 14:37:44 lemur kernel: Caller[00000000020003c0]
Apr  5 14:37:44 lemur kernel: Caller[00000000004ff670]
Apr  5 14:37:44 lemur kernel: Caller[00000000004ff760]
Apr  5 14:37:44 lemur kernel: Caller[00000000004ff930]
Apr  5 14:37:44 lemur kernel: Caller[0000000000499024]
Apr  5 14:37:44 lemur kernel: Caller[000000000049596c]
Apr  5 14:37:44 lemur kernel: Caller[000000000047acb4]
Apr  5 14:37:44 lemur kernel: Caller[000000000047b210]
Apr  5 14:37:44 lemur kernel: Caller[0000000000410af4]
Apr  5 14:37:44 lemur kernel: Caller[00000000700f67cc]
Apr  5 14:37:44 lemur kernel: Instruction DUMP: f84763d8  b2067fdc
84073fff <c606400f> 80a0e000  12480013  b938a000  c4064009  80a0a000


ksymoops gives:

>>PC;  02010dbc <[raid1]raid1_read_balance+19c/240>   <=====

>>g0; 00012000 Before first symbol
>>g1; fffff800137f0818 <END_OF_CODE+fffff800117c5bd5/????>
>>g3; ffffffffffffffd0 <END_OF_CODE+fffffffffdfd538d/????>
>>g4; fffff80000000000 <END_OF_CODE+fffff7fffdfd53bd/????>
>>g5; fffee002ab038f80 <END_OF_CODE+fffee002a900e33d/????>
>>g6; fffff800019fc000 <END_OF_CODE+fffff7ffff9d13bd/????>
>>o0; 00001028 Before first symbol
>>o1; fffff800137f081c <END_OF_CODE+fffff800117c5bd9/????>
>>o2; fffff8001386b4c0 <END_OF_CODE+fffff8001184087d/????>
>>o3; fffff80012f8fce0 <END_OF_CODE+fffff80010f6509d/????>
>>o5; 006b1518 <inactive_list+0/10>
>>sp; fffff800019fee61 <END_OF_CODE+fffff7ffff9d421e/????>
>>ret_pc; fffff800137f0820 <END_OF_CODE+fffff800117c5bdd/????>
>>l0; fffff800137f0800 <END_OF_CODE+fffff800117c5bbd/????>
>>l1; fffff800137f0c48 <END_OF_CODE+fffff800117c6005/????>
>>l2; 005e8400 <sysrq_reboot_op+0/18>
>>l3; fffff800137f0c18 <END_OF_CODE+fffff800117c5fd5/????>
>>l5; fffff800007fcc30 <END_OF_CODE+fffff7fffe7d1fed/????>
>>l7; fffff80013b412a0 <END_OF_CODE+fffff80011b1665d/????>
>>i1; fffee002ab038f5c <END_OF_CODE+fffee002a900e319/????>
>>i5; fffff800137f0800 <END_OF_CODE+fffff800117c5bbd/????>
>>i6; fffff800019fef21 <END_OF_CODE+fffff7ffff9d42de/????>
>>i7; 02010fd0 <[raid1]raid1_make_request+170/3a0>

Trace; 02010fd0 <[raid1]raid1_make_request+170/3a0>
Trace; 020003c0 <[md]md_make_request+40/a0>
Trace; 004ff670 <generic_make_request+d0/180>
Trace; 004ff760 <submit_bh+40/80>
Trace; 004ff930 <ll_rw_block+190/220>
Trace; 00499024 <ext3_bread+44/a0>
Trace; 0049596c <ext3_readdir+6c/440>
Trace; 0047acb4 <vfs_readdir+b4/140>
Trace; 0047b210 <sys_getdents64+30/180>
Trace; 00410af4 <linux_sparc_syscall32+34/40>
Trace; 700f67cc <END_OF_CODE+6e0cbb89/????>

Code;  02010db0 <[raid1]raid1_read_balance+190/240>
00000000 <_PC>:
Code;  02010db0 <[raid1]raid1_read_balance+190/240>
   0:   f8 47 63 d8       unknown
Code;  02010db4 <[raid1]raid1_read_balance+194/240>
   4:   b2 06 7f dc       add  %i1, -36, %i1
Code;  02010db8 <[raid1]raid1_read_balance+198/240>
   8:   84 07 3f ff       add  %i4, -1, %g2
Code;  02010dbc <[raid1]raid1_read_balance+19c/240>   <=====
   c:   c6 06 40 0f       ld  [ %i1 + %o7 ], %g3   <=====
Code;  02010dc0 <[raid1]raid1_read_balance+1a0/240>
  10:   80 a0 e0 00       cmp  %g3, 0
Code;  02010dc4 <[raid1]raid1_read_balance+1a4/240>
  14:   12 48 00 13       unknown
Code;  02010dc8 <[raid1]raid1_read_balance+1a8/240>
  18:   b9 38 a0 00       sra  %g2, 0, %i4
Code;  02010dcc <[raid1]raid1_read_balance+1ac/240>
  1c:   c4 06 40 09       ld  [ %i1 + %o1 ], %g2
Code;  02010dd0 <[raid1]raid1_read_balance+1b0/240>
  20:   80 a0 a0 00       cmp  %g2, 0


Any clues?
