Re: xfs_repair segfaults

On 2/28/13 9:22 AM, Ole Tange wrote:
> I forced a RAID online. I have done that before, and xfs_repair
> normally discards the last hour or so of data but saves everything
> else.
> 
> Today that did not work:
> 
> /usr/local/src/xfsprogs-3.1.10/repair# ./xfs_repair -n /dev/md5p1
> Phase 1 - find and verify superblock...
> Phase 2 - using internal log
>         - scan filesystem freespace and inode maps...
> flfirst 232 in agf 91 too large (max = 128)
> Segmentation fault (core dumped)

FWIW, the fs in question needs a log replay, so
xfs_repair -n sees it in a worse state than it really is...
I had forgotten that xfs_repair -n won't complain about
a dirty log.  It seems like it should.
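
(For reference, xfs_logprint from xfsprogs can show the log state
before you decide on repair options; roughly, and hedging on the
exact output format:

	# xfs_logprint -t /dev/md5p1

prints the transactional view of the log, and a dirty log will show
pending transactions rather than an empty tail.)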

But the log is corrupt enough that it won't replay:

XFS (loop0): Mounting Filesystem
XFS (loop0): Starting recovery (logdev: internal)
ffff88036e7cd800: 58 41 47 46 00 00 00 01 00 00 00 5b 0f ff ff 00  XAGF.......[....
XFS (loop0): Internal error xfs_alloc_read_agf at line 2146 of file fs/xfs/xfs_alloc.c.  Caller 0xffffffffa033d009

so really this'll require xfs_repair -L

xfs_repair -L doesn't segfault though, FWIW.
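
To spell out the recovery path for anyone finding this in the
archives (device name taken from Ole's report; note that -L discards
the unreplayable log, so the most recent changes will be lost), it
goes roughly:

	# mount /dev/md5p1 /mnt      <- try log replay first; fails here
	# xfs_repair -n /dev/md5p1   <- dry run; this is what segfaulted
	# xfs_repair -L /dev/md5p1   <- last resort: zero the corrupt log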

I'll try to look into the -n segfault in any case.
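
My first guess is that phase 2 prints the flfirst warning but then
walks the AGFL with the bogus index anyway.  A guard along these
lines (a hypothetical sketch, not the actual xfsprogs code; the
struct and function names are made up, and AGFL_SIZE is the 128-entry
count implied by the "max = 128" message) would let repair junk and
rebuild the freelist instead of running off the end of the buffer:

	#include <stdint.h>
	#include <stdio.h>

	#define AGFL_SIZE 128	/* AGFL entries; matches "max = 128" above */

	/* Subset of the on-disk AGF free-list fields. */
	struct agf_info {
		uint32_t flfirst;	/* first active AGFL slot */
		uint32_t fllast;	/* last active AGFL slot */
		uint32_t flcount;	/* number of active slots */
	};

	/*
	 * Return 1 if the AGF free-list indices are sane, 0 if the
	 * caller should discard and rebuild the freelist rather than
	 * dereference an out-of-range slot.
	 */
	static int
	freelist_indices_ok(const struct agf_info *agf, uint32_t agno)
	{
		if (agf->flfirst >= AGFL_SIZE) {
			fprintf(stderr,
			    "flfirst %u in agf %u too large (max = %u)\n",
			    agf->flfirst, agno, AGFL_SIZE);
			return 0;
		}
		if (agf->fllast >= AGFL_SIZE || agf->flcount > AGFL_SIZE)
			return 0;
		return 1;
	}

(Presumably the non-dry-run path already junks the freelist at this
point, which would explain why -L doesn't trip over it.)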

-Eric

> Core dump uploaded to: http://dna.ku.dk/~tange/tmp/xfs_repair.core.bz2
> 
> I also tried the git version, but could not get it to compile.
> 
> # uname -a
> Linux franklin 3.2.0-0.bpo.4-amd64 #1 SMP Debian 3.2.35-2~bpo60+1 x86_64 GNU/Linux
> 
> # ./xfs_repair -V
> xfs_repair version 3.1.10
> 
> # cat /proc/cpuinfo |grep MH | wc
>      64     256    1280
> 
> # cat /proc/partitions |grep md5
>    9        5 125024550912 md5
>  259        0 107521114112 md5p1
>  259        1 17503434752 md5p2
> 
> # cat /proc/mdstat
> Personalities : [raid0] [raid6] [raid5] [raid4]
> md5 : active raid0 md1[0] md4[3] md3[2] md2[1]
>       125024550912 blocks super 1.2 512k chunks
> 
> md1 : active raid6 sdd[1] sdi[9] sdq[13] sdau[7] sdt[10] sdg[5] sdf[4] sde[2]
>       31256138752 blocks super 1.2 level 6, 128k chunk, algorithm 2 [10/8] [_UU_UUUUUU]
>       bitmap: 2/2 pages [8KB], 1048576KB chunk
> 
> md4 : active raid6 sdo[13] sdu[9] sdad[8] sdh[7] sdc[6] sds[11] sdap[3] sdao[2] sdk[1]
>       31256138752 blocks super 1.2 level 6, 128k chunk, algorithm 2 [10/8] [_UUUU_UUUU]
>       [>....................]  recovery =  2.1% (84781876/3907017344) finish=2196.4min speed=29003K/sec
>       bitmap: 2/2 pages [8KB], 1048576KB chunk
> 
> md2 : active raid6 sdac[0] sdal[9] sdak[8] sdaj[7] sdai[6] sdah[5] sdag[4] sdaf[3] sdae[2] sdr[10]
>       31256138752 blocks super 1.2 level 6, 128k chunk, algorithm 2 [10/10] [UUUUUUUUUU]
>       bitmap: 0/2 pages [0KB], 1048576KB chunk
> 
> md3 : active raid6 sdaq[0] sdab[9] sdaa[8] sdb[7] sdy[6] sdx[5] sdw[4] sdv[3] sdz[10] sdj[1]
>       31256138752 blocks super 1.2 level 6, 128k chunk, algorithm 2 [10/10] [UUUUUUUUUU]
>       bitmap: 0/2 pages [0KB], 1048576KB chunk
> 
> unused devices: <none>
> 
> # smartctl -a /dev/sdau|grep Model
> Device Model:     Hitachi HDS724040ALE640
> 
> # hdparm -W /dev/sdau
> /dev/sdau:
>  write-caching =  0 (off)
> 
> # dmesg
> [ 3745.914280] xfs_repair[25300]: segfault at 7f5d9282b000 ip 000000000042d068 sp 00007f5da3183dd0 error 4 in xfs_repair[400000+7f000]
> 
> 
> /Ole

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs

