50% more blocks allocated than needed

Hi.

I've had some files on an XFS filesystem that use way
more blocks than they need.

That is, st_size = 50MB but st_blocks*512 = about 68MB.

The files were downloaded with wget over a somewhat unreliable 3G
connection (50-500kB/s). Sometimes defragging (xfs_fsr) fixes it,
sometimes not.

If st_blocks*512 < st_size it could be a sparse file, but this is
the opposite. So... preallocation?
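For hunting these down, here's a quick sketch that flags files whose
allocation exceeds their apparent size (the directory path is just an
example, and %s/%b are GNU stat format specifiers):

```shell
# Flag files whose allocated bytes exceed their apparent size -- the
# opposite of sparse. stat -c '%b' reports blocks in 512-byte units.
for f in /var/tmp/downloads/*; do
  [ -f "$f" ] || continue
  size=$(stat -c '%s' "$f")
  alloc=$(( $(stat -c '%b' "$f") * 512 ))
  if [ "$alloc" -gt "$size" ]; then
    printf '%s: size=%d allocated=%d\n' "$f" "$size" "$alloc"
  fi
done
```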

"df" before and after deleting a bunch of these files shows that the
st_blocks is what it cares about. Would the preallocation (if that's
what it is) be reclaimed if the fs started to run out of space?

If not preallocation, then what?

Note st_blocks is 134656, while xfs_bmap maps only blocks 0..97663
(i.e. 97664 512-byte blocks).

$ ls -l foo
-rw-r--r-- 1 root root 50000000 Jan 29 01:32 foo
$ du -h foo
66M foo
$ stat foo
  File: `foo'
  Size: 50000000   Blocks: 134656     IO Block: 4096   regular file
Device: fe04h/65028d Inode: 68688483    Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2013-01-30 18:56:18.603278346 +0000
Modify: 2013-01-29 01:32:51.000000000 +0000
Change: 2013-01-31 11:38:10.892330145 +0000
 Birth: -
$ xfs_bmap foo
foo:
0: [0..97663]: 44665840..44763503
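For what it's worth, the arithmetic on the numbers above (both counts
are in 512-byte units):

```shell
# Bytes covered by the single mapped extent 0..97663 (97664 blocks):
echo $(( (97663 + 1) * 512 ))   # 50003968 -- st_size rounded up to the 4KiB fs block
# Bytes implied by st_blocks:
echo $(( 134656 * 512 ))        # 68943872 -- the ~66M that du reports
```

So about 18MB of allocation isn't accounted for by the mapped extent.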

--
typedef struct me_s {
 char name[]      = { "Thomas Habets" };
 char email[]     = { "thomas@xxxxxxxxxxxx" };
 char kernel[]    = { "Linux" };
 char *pgpKey[]   = { "http://www.habets.pp.se/pubkey.txt"; };
 char pgp[] = { "A8A3 D1DD 4AE0 8467 7FDE  0945 286A E90A AD48 E854" };
 char coolcmd[]   = { "echo '. ./_&. ./_'>_;. ./_" };
} me_t;

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs

