Re: xfs_repair segfaults

On Fri, Mar 1, 2013 at 12:17 PM, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
> On Thu, Feb 28, 2013 at 04:22:08PM +0100, Ole Tange wrote:
:
>> I forced a RAID online. I have done that before and xfs_repair
>> normally removes the last hour of data or so, but saves everything
>> else.
>
> Why did you need to force it online?

More than two hard disks went offline. We have seen this before, and it is
not due to bad disks; it may be due to the driver, timings, or the controller.

The alternative to forcing it online would be to restore from a backup.
Since we are talking about 100 TB of data, restoring can take a
week and would set us back to the last backup (which is more than a day
old). So it is preferable to force the last failing hard disk online,
even though that costs us a few hours of work.
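
For the record, "forcing it online" means reassembling the degraded md
array with mdadm's --force option, which tells mdadm to accept the stale
event count on the last disk that dropped out. Roughly like this (the
member list is illustrative, not our exact command history):

# mdadm --stop /dev/md5
# mdadm --assemble --force /dev/md5 /dev/sd[b-m]1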

>> Today that did not work:
>>
>> /usr/local/src/xfsprogs-3.1.10/repair# ./xfs_repair -n /dev/md5p1
>> Phase 1 - find and verify superblock...
>> Phase 2 - using internal log
>>         - scan filesystem freespace and inode maps...
>> flfirst 232 in agf 91 too large (max = 128)
>
> Can you run:
>
> # xfs_db -c "agf 91" -c p /dev/md5p1
>
> And post the output?

# xfs_db -c "agf 91" -c p /dev/md5p1
xfs_db: cannot init perag data (117)
magicnum = 0x58414746
versionnum = 1
seqno = 91
length = 268435200
bnoroot = 295199
cntroot = 13451007
bnolevel = 2
cntlevel = 2
flfirst = 232
fllast = 32
flcount = 191
freeblks = 184285136
longest = 84709383
btreeblks = 24

The partition has previously been mounted with -o inode64.
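
For anyone who hits this later: the complaint comes from a sanity check
on the AGF freelist indices. Assuming 512-byte sectors, which is what
the reported max of 128 corresponds to (512 / 4 bytes per entry on the
pre-CRC AGFL format), flfirst = 232 and flcount = 191 above are both out
of range. A minimal sketch of that kind of bounds check, fed with the
values above (my reconstruction for illustration, not the actual
xfs_repair source):

/*
 * Sketch of the freelist bounds check behind the "flfirst ... too
 * large" message.  Assumes the pre-CRC AGFL layout with 512-byte
 * sectors: 512 / 4 bytes per entry = 128 slots, hence "max = 128".
 */
#include <stdio.h>
#include <stdint.h>

#define AGFL_SIZE 128u          /* AGFL entries at 512-byte sectors */

struct agf_freelist {
        uint32_t flfirst;       /* first active slot in the AGFL ring */
        uint32_t fllast;        /* last active slot in the AGFL ring */
        uint32_t flcount;       /* number of active slots */
};

/* Return 0 if the freelist indices are plausible, -1 otherwise. */
static int
check_agf_freelist(const struct agf_freelist *fl, uint32_t agno)
{
        if (fl->flfirst >= AGFL_SIZE) {
                printf("flfirst %u in agf %u too large (max = %u)\n",
                       fl->flfirst, agno, AGFL_SIZE);
                return -1;
        }
        if (fl->fllast >= AGFL_SIZE) {
                printf("fllast %u in agf %u too large (max = %u)\n",
                       fl->fllast, agno, AGFL_SIZE);
                return -1;
        }
        if (fl->flcount > AGFL_SIZE) {
                printf("flcount %u in agf %u too large (max = %u)\n",
                       fl->flcount, agno, AGFL_SIZE);
                return -1;
        }
        return 0;
}

int
main(void)
{
        /* The values from the xfs_db output above. */
        struct agf_freelist fl = {
                .flfirst = 232, .fllast = 32, .flcount = 191,
        };

        return check_agf_freelist(&fl, 91) ? 1 : 0;
}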

/Ole
