Re: metadata overhead

Hi,

On Sun 02-09-18 23:58:56, Liu Bo wrote:
> My question is: is there a way to calculate how much space the
> metadata occupies?
> 
> So the case I've run into is that 'df /mnt' shows
> 
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/sdc              882019232  26517428 811356348   4% /mnt
> 
> but 'du -s /mnt' shows
> 13347132        /mnt
> 
> And this is a freshly mounted ext4, so no deleted files or dirty data exist.
> 
> The kernel is quite old (2.6.32), but I was just wondering: could the
> difference be due to metadata using about 13G, given that the whole
> filesystem is 842G?

Yes, that sounds plausible.

> I think it has nothing to do with "Reserved block counts" as df
> calculates "Used" in ext4_statfs() by "buf->f_blocks - buf->f_bfree".
> 
> So if there is a way to determine how much space the metadata uses,
> either by manual analysis of dumpe2fs/debugfs output or with a tool,
> could you please suggest one?

So the journal takes up some space. You can see its size with the debugfs
command:

stat <8>

The inode table also takes a lot of blocks. Run:

stats

and look for "Inode count"; multiply it by "Inode size" to get the inode
table size in bytes. Then there are bitmap blocks - count 2 blocks for each
group. The rest of the overhead should be pretty minimal.
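
For example, a rough sketch of the calculation (the device name below is
just an example, adjust it for your setup):

  # journal size in bytes is on the "Size:" line of the journal inode
  debugfs -R 'stat <8>' /dev/sdc | grep 'Size:'

  # superblock fields needed for the rest
  dumpe2fs -h /dev/sdc | egrep 'Block count|Block size|Blocks per group|Inode count|Inode size'

  # then, roughly:
  #   inode table bytes = Inode count * Inode size
  #   bitmap blocks     = 2 * ceil(Block count / Blocks per group)
  #   metadata          ~ journal + inode tables + bitmaps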

								Honza
-- 
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR


