Re: xfs_repair deleting realtime files.

On 9/21/12 10:51 AM, Anand Tiwari wrote:
> 
> 
> On Thu, Sep 20, 2012 at 11:00 PM, Eric Sandeen <sandeen@xxxxxxxxxxx> wrote:
> 
> On 9/20/12 7:40 PM, Anand Tiwari wrote:
>> Hi All,
>> 
>> I have been looking into an issue with xfs_repair on a realtime
>> subvolume. Sometimes while running xfs_repair I see the following
>> errors:
>> 
>> ----------------------------
>> data fork in rt inode 134 claims used rt block 19607
>> bad data fork in inode 134
>> would have cleared inode 134
>> data fork in rt inode 135 claims used rt block 29607
>> bad data fork in inode 135
>> would have cleared inode 135
>>         - agno = 1
>>         - agno = 2
>>         - agno = 3
>>         - process newly discovered inodes...
>> Phase 4 - check for duplicate blocks...
>>         - setting up duplicate extent list...
>>         - check for inodes claiming duplicate blocks...
>>         - agno = 0
>>         - agno = 1
>>         - agno = 2
>>         - agno = 3
>> entry "test-011" in shortform directory 128 references free inode 134
>> would have junked entry "test-011" in directory inode 128
>> entry "test-0" in shortform directory 128 references free inode 135
>> would have junked entry "test-0" in directory inode 128
>> data fork in rt ino 134 claims dup rt extent,off - 0, start - 7942144, count 2097000
>> bad data fork in inode 134
>> would have cleared inode 134
>> data fork in rt ino 135 claims dup rt extent,off - 0, start - 13062144, count 2097000
>> bad data fork in inode 135
>> would have cleared inode 135
>> No modify flag set, skipping phase 5
>> ------------------------
>> 
>> Here is the bmap for both inodes.
>> 
>> xfs_db> inode 135
>> xfs_db> bmap
>> data offset 0 startblock 13062144 (12/479232) count 2097000 flag 0
>> data offset 2097000 startblock 15159144 (14/479080) count 2097000 flag 0
>> data offset 4194000 startblock 17256144 (16/478928) count 2097000 flag 0
>> data offset 6291000 startblock 19353144 (18/478776) count 2097000 flag 0
>> data offset 8388000 startblock 21450144 (20/478624) count 2097000 flag 0
>> data offset 10485000 startblock 23547144 (22/478472) count 2097000 flag 0
>> data offset 12582000 startblock 25644144 (24/478320) count 2097000 flag 0
>> data offset 14679000 startblock 27741144 (26/478168) count 2097000 flag 0
>> data offset 16776000 startblock 29838144 (28/478016) count 2097000 flag 0
>> data offset 18873000 startblock 31935144 (30/477864) count 1607000 flag 0
>> xfs_db> inode 134
>> xfs_db> bmap
>> data offset 0 startblock 7942144 (7/602112) count 2097000 flag 0
>> data offset 2097000 startblock 10039144 (9/601960) count 2097000 flag 0
>> data offset 4194000 startblock 12136144 (11/601808) count 926000 flag 0
> 
> It's been a while since I thought about realtime, but -
> 
> That all seems fine; I don't see anything overlapping there. They
> are all perfectly adjacent, though of an interesting size.
> 
>> 
>> By looking into the xfs_repair code, it looks like repair does not
>> handle a case where we have more than one extent in a real-time
>> extent. The following is code from repair/dinode.c: process_rt_rec
> 
> "more than one extent in a real-time extent?"  I'm not sure what that
> means.
> 
> Every extent above is length 2097000 blocks, and they are adjacent. 
> But you say your realtime extent size is 512 blocks ... which doesn't
> go into 2097000 evenly.   So that's odd, at least.
> 
> 
> Well, let's look at the first extent:
>> data offset 0 startblock 13062144 (12/479232) count 2097000 flag 0
>> data offset 2097000 startblock 15159144 (14/479080) count 2097000 flag 0
> The startblock is aligned, and its realtime extent number is 25512
> (13062144 / 512). Since the block count is not a multiple of 512, the
> last realtime extent covered (25512 + 4095 = 29607) is only partially
> used: 360 blocks. The second extent starts in that same realtime
> extent, 29607. So, yes, the extents do not overlap, but realtime
> extent 29607 is shared by two extents. Once xfs_repair detects this
> case in phase 2, it bails out and clears that inode. I think the
> search for duplicate extents is done in phase 4, but by then the
> inode is already marked.
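
Ah, I see. Restating that arithmetic with the numbers from this
thread (the 512-block realtime extent size and the first two extents
of inode 135 come from the bmap output above; the rest is just
illustration), a quick C sketch:

/*
 * Minimal sketch of the rt extent arithmetic described above.  The
 * 512-block rt extent size and the extent values are from this
 * thread; nothing here is actual xfsprogs code.
 */
#include <stdio.h>

int main(void)
{
	unsigned long long extsize = 512;	/* rt extent size, in fs blocks */

	/* first two data fork extents of inode 135 */
	unsigned long long start1 = 13062144, count1 = 2097000;
	unsigned long long start2 = 15159144;

	unsigned long long first_rtx    = start1 / extsize;		    /* 25512 */
	unsigned long long last_rtx     = (start1 + count1 - 1) / extsize;  /* 29607 */
	unsigned long long used_in_last = (start1 + count1) % extsize;	    /* 360 */

	printf("extent 1 spans rt extents %llu..%llu (last one: %llu blocks used)\n",
	       first_rtx, last_rtx, used_in_last);
	printf("extent 2 starts in rt extent %llu\n", start2 / extsize);   /* 29607 */
	return 0;
}

Both extents land in realtime extent 29607 even though no filesystem
block is claimed twice, which matches the "claims used rt block 29607"
message for inode 135 above.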

... ok, I realize I was misunderstanding some things about the realtime
volume. (It's been a very long time since I thought about it.) Still,
I'd like to look at the metadump image if possible.
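
On the repair side, I'd guess the phase 2 walk marks each rt extent
that a data fork extent touches and complains when one is already
marked, so a partially used rt extent shared by two adjacent extents
trips the same check as a real overlap. Here is a hypothetical sketch
of that bookkeeping; the names and structure are illustrative only,
not the actual process_rt_rec code from repair/dinode.c:

/*
 * Hypothetical per-rt-extent bookkeeping, for illustration only --
 * not the actual repair/dinode.c code.  Mark every rt extent an
 * extent touches; complain if it is already marked.
 */
#include <stdio.h>

#define EXTSIZE	512ULL	/* rt extent size in fs blocks, from this thread */
#define NRTX	65536	/* enough rt extents for this example */

static unsigned char rtx_used[NRTX];	/* 1 = some extent already claims it */

static void check_rt_rec(unsigned long long ino,
			 unsigned long long start,
			 unsigned long long count)
{
	unsigned long long rtx;

	for (rtx = start / EXTSIZE; rtx <= (start + count - 1) / EXTSIZE; rtx++) {
		if (rtx_used[rtx])
			printf("data fork in rt inode %llu claims used rt block %llu\n",
			       ino, rtx);
		rtx_used[rtx] = 1;
	}
}

int main(void)
{
	/* first two extents of inode 135, from the bmap above */
	check_rt_rec(135, 13062144ULL, 2097000ULL);	/* rt extents 25512..29607 */
	check_rt_rec(135, 15159144ULL, 2097000ULL);	/* starts in rt extent 29607 */
	return 0;
}

With the two adjacent extents of inode 135, this reports rt extent
29607 as already used -- the same number as in the repair output
above -- even though no block is claimed by more than one extent.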

Thanks,
-Eric

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs

