On 04.01.2013 16:35, pille wrote:
> hi,
>
> # du -hs file copy
> 128M file !!
> 100M copy

I think you have been bitten by "speculative preallocation". When XFS
"thinks" you will extend the file in the future, it speculatively
allocates extra space. This prevents fragmentation.

It fixes itself over time. Either do enough I/O that the cached data of
the copy gets reclaimed, or unmount the filesystem, or run
"echo 3 > /proc/sys/vm/drop_caches" (the last one drops the whole Linux
page cache, except dirty pages!).

If it is a problem, the behaviour can be disabled with the mount option
"allocsize", for example "allocsize=4k".

--
Matthias

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs
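
[A quick way to compare the apparent file size with the space actually
allocated, and to look at the extent map, might be the following sketch.
It assumes GNU coreutils and xfsprogs are installed; "copy" is just the
file name from the example above:]

  ls -lh copy       # apparent file size
  du -h copy        # allocated space, including any preallocated blocks
  stat -c '%s bytes, %b blocks of %B bytes' copy
  xfs_bmap -v copy  # extent map of the file (xfsprogs)

[If you want the allocsize setting to persist, a hypothetical /etc/fstab
entry could look like this; the device and mount point are placeholders:]

  /dev/sdXN  /mnt/data  xfs  defaults,allocsize=4k  0  0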