Re: filesystem shrinks after using xfs_repair

On Fri, Jul 23, 2010 at 06:08:08PM -0700, Eli Morris wrote:
> On Jul 23, 2010, at 5:54 PM, Dave Chinner wrote:
> > On Fri, Jul 23, 2010 at 01:30:40AM -0700, Eli Morris wrote:
> >> I think the RAID tech support and I found and corrected the
> >> hardware problems associated with the RAID. I'm still having the
> >> same problem though. I expanded the filesystem to use the space of
> >> the now corrected RAID and that seems to work OK. I can write
> >> files to the new space OK. But then, if I run xfs_repair on the
> >> volume, the newly added space disappears and there are tons of
> >> error messages from xfs_repair (listed below).
> > 
> > Can you post the full output of the xfs_repair? The superblock is
> > the first thing that is checked and repaired, so if it is being
> > "repaired" to reduce the size of the volume then all the other errors
> > are just a result of that. e.g. the grow could be leaving stale
> > secondary superblocks around and repair is seeing a primary/secondary
> > mismatch and restoring the secondary which has the size parameter
> > prior to the grow....
> > 
> > Also, the output of 'cat /proc/partitions' would be interesting
> > from before the grow, after the grow (when everything is working),
> > and again after the xfs_repair when everything goes bad....
> 
> Thanks for replying. Here is the output I think you're looking for....

Sure is. The underlying device does not change configuration, and:

> [root@nimbus /]# xfs_repair /dev/mapper/vg1-vol5
> Phase 1 - find and verify superblock...
> writing modified primary superblock
> Phase 2 - using internal log

There's a smoking gun - the primary superblock was modified in some
way. It looks like the only way this can happen without an error or
warning being emitted is if repair found more secondary superblocks
with the old geometry in them than with the new geometry.
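For example (same xfs_db usage as the loop further down, with <device>
standing in for the volume), you can compare the primary against a single
secondary directly; a stale secondary left over from before the grow would
still report the pre-grow dblocks value:

# xfs_db -r -c "sb 0" -c "p dblocks" -c "p agcount" <device>
# xfs_db -r -c "sb 1" -c "p dblocks" -c "p agcount" <device>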

With a current kernel, growfs is supposed to update every single
secondary superblock, so I can't see how this could be occurring.
However, can you remind me what kernel you are running and gather
the following information?

Run this before the grow:

# echo 3 > /proc/sys/vm/drop_caches
# for ag in `seq 0 1 125`; do
> xfs_db -r -c "sb $ag" -c "p agcount" -c "p dblocks" <device>
> done

Then run the grow, sync, and unmount the filesystem. After that,
re-run the above xfs_db loop and post the output of both runs so I can
see what growfs is actually doing to the secondary superblocks.
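As a rough sketch of that sequence (with <mountpoint> standing in for
wherever the volume is mounted - xfs_growfs takes the mount point, not
the device):

# xfs_growfs <mountpoint>
# sync
# umount <mountpoint>
# echo 3 > /proc/sys/vm/drop_caches
# for ag in `seq 0 1 125`; do
> xfs_db -r -c "sb $ag" -c "p agcount" -c "p dblocks" <device>
> done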

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs

