On Tue, Aug 13, 2013 at 04:55:00PM +0200, Michael Maier wrote:
> Dave Chinner wrote:
> > On Mon, Aug 12, 2013 at 06:50:55PM +0200, Michael Maier wrote:
> >> Meanwhile, I faced another problem on another XFS filesystem with
> >> Linux 3.10.5 which I had never seen before. While writing a few
> >> bytes to disk, I got "disk full" and the write failed.
> >>
> >> At the same time, df reported 69G of free space! I ran xfs_repair -n
> >> and got:
> >>
> >> xfs_repair -n /dev/mapper/raid0-daten2
> >> Phase 1 - find and verify superblock...
> >> Phase 2 - using internal log
> >>         - scan filesystem freespace and inode maps...
> >> sb_ifree 591, counted 492
> >> ^^^^^^^^^^^^^^^^^^^^^^^^^
> >>
> >> What does this mean? How can I get rid of it without losing data?
> >> This filesystem was created a few days ago and has never been
> >> resized.
> >
> > Superblock inode counting is lazy - it can get out of sync after an
> > unclean shutdown, but generally mounting a dirty filesystem will
> > result in it being recalculated rather than trusted to be correct.
> > So there's nothing to worry about here.
>
> When will it be self-healed?

That depends on whether there's actually a problem. Like I said in the
part you snipped off: if you run xfs_repair -n on a filesystem that
needs log recovery, that accounting difference is expected.

> I can still see it today after 4 remounts!

See what?

> This is strange and I can't use the free space, which I need! How can
> it be forced to be repaired without data loss?

The message above is complaining about a free inode count mismatch,
not about the free space accounting being wrong. What problem are you
actually having?

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
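
To make the sequence described above concrete, here is a minimal
sketch of the check procedure. It assumes the device is the
/dev/mapper/raid0-daten2 volume from the quoted output, that nothing
is currently using the filesystem, and that /mnt is an available
mountpoint:

    # Mounting a dirty XFS filesystem replays the log; with lazy
    # superblock counters, this also recalculates values like sb_ifree.
    mount /dev/mapper/raid0-daten2 /mnt
    umount /mnt

    # xfs_repair must be run on an unmounted filesystem. Once the log
    # has been replayed, the read-only check should no longer report
    # the sb_ifree/counted mismatch.
    xfs_repair -n /dev/mapper/raid0-daten2

The point is that xfs_repair -n on a filesystem with a dirty log
reports counter differences that log recovery would have corrected
anyway, so the mount/unmount cycle has to happen before the read-only
check result means anything.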