Re: xfs_repair fails: mismatched UUID?

On 8/17/17 12:21 AM, Eric Sandeen wrote:
> On 8/17/17 12:15 AM, Link Dupont wrote:
>> Hello,
>>
>> I have a JMicron USB RAID enclosure that is exhibiting read failures.
>>
>>> [59996.137762] sd 5:0:0:0: [sdd] tag#0 FAILED Result: hostbyte=DID_OK
>>> driverbyte=DRIVER_SENSE
>>> [59996.137786] sd 5:0:0:0: [sdd] tag#0 Sense Key : Aborted Command [current] 
>>> [59996.137792] sd 5:0:0:0: [sdd] tag#0 Add. Sense: No additional sense
>>> information
>>> [59996.137798] sd 5:0:0:0: [sdd] tag#0 CDB: Read(16) 88 00 00 00 00 00 a6 42
>>> 1d d8 00 00 00 08 00 00
>>> [59996.137804] blk_update_request: I/O error, dev sdd, sector 2789350872
>>> [59996.137816] Buffer I/O error on dev sdd, logical block 348668859, async
>>> page read
>>>
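(Aside: the "sector" and "logical block" figures above describe the same byte offset in different units; assuming the usual 512-byte sectors and 4 KiB buffer pages, the two numbers line up:

# echo $((2789350872 * 512)) $((348668859 * 4096))
1428147646464 1428147646464

i.e. these are plain device-level read failures, not anything XFS-specific yet.)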
>>
>> I ran ddrescue to block copy the device to another external USB drive. This
>> appears to have been successful (though not without errors). The new drive shows
>> an identical UUID to the original.
>>
>>> # xfs_admin -u /dev/sdc1
>>> UUID = 7b2fc6f7-0f6b-40fb-b264-75d25e9d0d1e
>>> # xfs_admin -u /dev/sdd1
>>> UUID = 7b2fc6f7-0f6b-40fb-b264-75d25e9d0d1e
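(For anyone reproducing this: a typical GNU ddrescue device-to-device clone looks roughly like the two passes below. The device names follow the thread, but the options and map-file name are illustrative, not the command actually used.

# ddrescue -f -n /dev/sdd /dev/sdc rescue.map
# ddrescue -f -r3 /dev/sdd /dev/sdc rescue.map

The first pass copies everything it can while skipping the slow scraping of bad areas; the second retries the sectors recorded as bad in rescue.map; -f is needed because the output is a block device. Since this is a raw block-level copy, the partition table and the XFS superblock, UUID included, come across verbatim, which is why sdc1 and sdd1 report the same UUID.)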
>>
>> I'm still unable to mount the file-system on sdc1.
>>
>>> # mount /dev/sdc1 mnt
>>> mount: /root/mnt: mount(2) system call failed: Structure needs cleaning.
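("Structure needs cleaning" is the kernel's EUCLEAN/EFSCORRUPTED return, which XFS uses when it detects on-disk corruption; the mount error itself doesn't say what was wrong, but the kernel log usually does, e.g.:

# dmesg | grep -i xfs | tail

which should show the specific metadata check that failed just before the mount was rejected.)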
>>
>> I ran xfs_repair.
>>
>>> # xfs_repair /dev/sdc1
>>> Phase 1 - find and verify superblock...
>>>         - reporting progress in intervals of 15 minutes
>>> Phase 2 - using internal log
>>>         - zero log...
>>> * ERROR: mismatched uuid in log
>>> *            SB : 7b2fc6f7-0f6b-40fb-b264-75d25e9d0d1e
>>> *            log: 00000001-0038-8e56-0000-000100388e2e
> 
> That looks suspiciously /not/ like a UUID.
> 
> Which makes me think that the "log" was not the log, and your enclosure
> scrambled data around somehow.
> 
> Forcing that mess through repair may have yielded more harm than good.
> So, good thing it was on the clone.
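(A note for anyone following along: xfs_repair has a no-modify mode that only reports what it would change, which is a safer first pass on a suspect device or clone:

# xfs_repair -n /dev/sdc1

In no-modify mode it exits non-zero if corruption is found, so it also works as a quick yes/no check before letting repair rewrite anything.)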
> 
> Did the enclosure contain multiple disks?  Did they possibly get
> re-ordered or otherwise scrambled?

Oh right, you did say RAID enclosure.

My money is on the enclosure no longer presenting you with sectors in
the original order, or in an otherwise severely damaged form, for
whatever reason.
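A cheap way to test that theory without modifying the clone is to ask each allocation group's superblock for its UUID with xfs_db in read-only mode; on an intact filesystem every AG carries the same UUID, so missing or mismatched copies point at exactly this kind of reordering. A rough sketch (the AG count of 4 is only an example, read agcount first):

# xfs_db -r -c 'sb 0' -c 'p agcount' /dev/sdc1
# for ag in 0 1 2 3; do xfs_db -r -c "sb $ag" -c 'p uuid' /dev/sdc1; done

If some of those reads fail or come back with garbage like the "log" UUID above, that's the enclosure, not the filesystem.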

IOWS: I think this is a storage problem that xfs_repair can't solve.

-Eric
