Since kernel and udev update, unable to access some directories on XFS, but xfs_repair doesn't repair anything

Hey there,

Due to a lack of time, I didn't touch my file server for a few weeks. After booting it up again today, I decided to update from 2.6.32.1 to 2.6.33.1 since udev was complaining, and since then I'm unable to access some files and directories. As far as I can tell, only things I modified or added the last time the machine was up are affected.

When trying to do an ls in such a folder, for example, the following output is shown:

serverteil:/raid10# ls -l
ls: cannot access images: Invalid argument
total 64
drwxr-xr-x   8 root root   154 2009-12-19 17:50 one
drwxr-xr-x  12 root root  4096 2009-12-19 22:02 two
drwxr-xr-x  17 root root  4096 2009-07-04 12:52 three
drwxr-xr-x  16 root root  4096 2009-03-23 06:07 three_backup
??????????   ? ?    ?        ?                ? images
drwxr-xr-x   3 root root    26 2009-12-30 10:01 five
drwxr-xr-x   6 root root   132 2009-12-22 10:00 six
d---------  31 root root  4096 2009-12-19 20:36 seven
...

Except for the images folder, I can access everything and the data is intact. Inside accessible directories there are, in turn, further inaccessible directories.
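
If it helps, the exact failing syscall can be captured with strace; a minimal sketch of what I'd run (the traced line below is only illustrative, the real call and arguments may differ):

serverteil:/raid10# strace ls -l 2>&1 | grep EINVAL
lstat("images", ...)                    = -1 EINVAL (Invalid argument)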

After I noticed the odd behaviour I promptly unmounted the device and checked dmesg: nothing out of the ordinary.

xfs_check told me all is fine. So did xfs_repair:

serverteil:~# xfs_repair /dev/mapper/raid10
Phase 1 - find and verify superblock...
Phase 2 - using internal log
       - zero log...
       - scan filesystem freespace and inode maps...
       - found root inode chunk
Phase 3 - for each AG...
       - scan and clear agi unlinked lists...
       - process known inodes and perform inode discovery...
       - agno = 0
       - agno = 1
       - agno = 2
       - agno = 3
       - agno = 4
       - agno = 5
       - agno = 6
       - agno = 7
       - agno = 8
       - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
       - setting up duplicate extent list...
       - check for inodes claiming duplicate blocks...
       - agno = 0
       - agno = 1
       - agno = 3
       - agno = 4
       - agno = 5
       - agno = 6
       - agno = 2
       - agno = 7
       - agno = 8
Phase 5 - rebuild AG headers and trees...
       - reset superblock...
Phase 6 - check inode connectivity...
       - resetting contents of realtime bitmap and summary inodes
       - traversing filesystem ...
       - traversal finished ...
       - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
done
serverteil:~#

serverteil:~# xfs_repair -V
xfs_repair version 3.1.1
serverteil:~#
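
Aside: xfs_repair -n (no-modify mode) would just report what it intends to fix without writing anything; a sketch of the invocation, assuming the filesystem is still unmounted:

serverteil:~# xfs_repair -n /dev/mapper/raid10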

serverteil:~# xfs_info /dev/mapper/raid10
meta-data=/dev/mapper/raid10     isize=256    agcount=9, agsize=268435455 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=2342663424, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=32768, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=0
serverteil:~#
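
Since plain readdir still returns inode numbers even when stat fails, the raw inode behind images could be inspected with xfs_db; a sketch (the inode number 12345 is made up, and the filesystem should be unmounted for this):

serverteil:/raid10# ls -i | grep images
12345 images
serverteil:~# xfs_db -r -c 'inode 12345' -c 'print' /dev/mapper/raid10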

The underlying device is an mdadm-created software raid10:
md1 : active raid10 sdbo[44] sdr[0] sdbb[47] sdbq[46] sdaz[45] sdaw[42] sdbl[41] sdau[40] sdat[39] sdbi[38] sdar[37] sdbg[36] sdbf[35] sdao[34] sdbd[33] sdam[32] sdac[31] sdal[30] sdaa[29] sdaj[28] sdx[27] sdah[26] sdp[25] sdbc[24] sdaf[23] sdi[22] sdn[21] sdy[20] sdak[19] sdl[18] sdg[17] sdv[16] sdai[15] sdj[14] sdm[13] sdt[12] sdag[11] sdk[10] sdad[9] sdq[8] sdw[7] sdo[6] sdu[5] sdab[4] sds[3] sdz[2] sdh[1]
      9370653696 blocks super 1.0 64K chunks 2 near-copies [48/46] [UUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUU__UUU]
      [=====>...............]  recovery = 25.1% (98112640/390443904) finish=181.6min speed=26816K/sec

unused devices: <none>

But since the raid is intact and consistent, I doubt it has anything to do with it.
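
In case it does matter after all, the array state can be double-checked with mdadm; a sketch:

serverteil:~# mdadm --detail /dev/md1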

Any hints on getting access to those directories back, or are they gone for good?

Thanks in advance

Tobias


