Dave Chinner wrote:
> On Mon, Aug 12, 2013 at 06:50:55PM +0200, Michael Maier wrote:
>> Meanwhile, I faced another problem on another xfs file system with linux
>> 3.10.5 which I never saw before. While writing a few bytes to disc, I
>> got "disc full" and the write failed.
>>
>> At the same time, df reported 69G of free space! I ran xfs_repair -n and
>> got:
>>
>> xfs_repair -n /dev/mapper/raid0-daten2
>> Phase 1 - find and verify superblock...
>> Phase 2 - using internal log
>>         - scan filesystem freespace and inode maps...
>> sb_ifree 591, counted 492
>> ^^^^^^^^^^^^^^^^^^^^^^^^^
>> What does this mean? How can I get rid of it w/o losing data? This file
>> system was created a few days ago and never resized.
>
> Superblock inode counting is lazy - it can get out of sync after
> an unclean shutdown, but generally mounting a dirty filesystem will
> result in it being recalculated rather than trusted to be correct.
> So there's nothing to worry about here.

When will it be self-healed? I can still see it today, after 4 remounts!
This is strange, and I can't use the free space, which I need!

How can it be forced to be repaired w/o data loss?

Thanks, kind regards,
Michael
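
P.S. To be concrete, this is the sequence I would try next - just a rough
sketch, assuming the filesystem can be taken offline for a moment and that a
plain xfs_repair run is what recalculates the counters (the mount point below
is only a placeholder, mine differs):

    # take the filesystem offline first; xfs_repair must not be run
    # on a mounted filesystem
    umount /dev/mapper/raid0-daten2

    # full repair run (without -n this time), which should rebuild the
    # free space / free inode counters from what it finds on disk
    xfs_repair /dev/mapper/raid0-daten2

    # put it back into service (mount point is just a placeholder)
    mount /dev/mapper/raid0-daten2 /mnt/daten2

If that is the right approach, I would expect a subsequent xfs_repair -n run
to come back without the sb_ifree complaint. Please correct me if a read-write
repair is the wrong tool here.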