Re: metadata overhead

On Mon 03-09-18 17:20:21, Liu Bo wrote:
> On Mon, Sep 3, 2018 at 2:53 AM, Jan Kara <jack@xxxxxxx> wrote:
> > On Sun 02-09-18 23:58:56, Liu Bo wrote:
> >> My question is: is there a way to calculate how much space the
> >> metadata occupies?
> >>
> >> So the case I've run into is that 'df /mnt' shows
> >>
> >> Filesystem           1K-blocks      Used Available Use% Mounted on
> >> /dev/sdc              882019232  26517428 811356348   4% /mnt
> >>
> >> but 'du -s /mnt' shows
> >> 13347132        /mnt
> >>
> >> And this is a freshly mounted ext4, so no deleted files or dirty data exist.
> >>
> >> The kernel is quite old (2.6.32), but I was just wondering: could it
> >> be that metadata uses about 13G, given the whole filesystem is 842G?
> >
> > Yes, that sounds plausible.
> >
> >> I think it has nothing to do with "Reserved block counts" as df
> >> calculates "Used" in ext4_statfs() by "buf->f_blocks - buf->f_bfree".
> >>
> >> So if there is a way to know the usage of metadata space, via either
> >> manual analysis from the output of dumpe2fs/debugfs or a tool, could
> >> you please suggest?
> >
> > So journal takes up some space. Debugfs command:
> >
> > stat <8>
> >
> > Inode table takes lots of blocks:
> >
> > stats
> >
> > search for "Inode count", multiply by "Inode size". Then there are bitmap
> > blocks - count 2 blocks for each group. The rest of overhead should be
> > pretty minimal.
> >
> 
> Thank you so much for the reply, Jan.
> 
> Per what you've mentioned, the journal + inode table have taken >14G
> in this ext4, so that's a lot of space, good.
> 
> And digging further, I found that the overhead from (journal +
> inode_table + block_bitmap) is already excluded from the output of
> 'df', as ext4_statfs() computes buf->f_blocks by
> 
> buf->f_blocks = ext4_blocks_count(es) - EXT4_C2B(sbi, overhead);
> 
> and ->f_blocks is shown as "Total". But there is still a gap between
> "Used" in df (26517428 * 1024) and the summary reported by "du -s"
> (13347132 * 1024),

Correct, I forgot about this.
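To put rough numbers on the overhead estimate from my previous mail, here is a
quick back-of-the-envelope sketch. All the superblock figures below are
hypothetical; substitute the real values from 'debugfs -R stats' or
'dumpe2fs -h' on your device:

```python
# Rough ext4 metadata overhead estimate. All superblock figures here are
# hypothetical; substitute the values dumpe2fs reports for your filesystem.
block_size     = 4096        # "Block size"
inode_count    = 55050240    # "Inode count"
inode_size     = 256         # "Inode size"
group_count    = 6730        # number of block groups
journal_blocks = 32768       # from `debugfs -R 'stat <8>'` (a 128 MiB journal)

inode_tables = inode_count * inode_size         # bytes in the inode tables
bitmaps      = group_count * 2 * block_size     # block + inode bitmap per group
journal      = journal_blocks * block_size

overhead_bytes = inode_tables + bitmaps + journal
print(f"inode tables: {inode_tables / 2**30:.2f} GiB")
print(f"bitmaps:      {bitmaps / 2**30:.2f} GiB")
print(f"journal:      {journal / 2**30:.2f} GiB")
print(f"total:        {overhead_bytes / 2**30:.2f} GiB")

# ext4_statfs() subtracts this overhead from the raw block count, so df's
# "Total" column already excludes it:
#   buf->f_blocks = ext4_blocks_count(es) - EXT4_C2B(sbi, overhead);
```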

> --------
> # df /mnt
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/sdc              882019232  26517428 811356348   4% /mnt
> 
> #du -s /mnt
> 13347132        /mnt
> --------
> 
> Now I'm even more curious: any idea where that gap could come from?

Interesting question. On my test filesystem, for example, the gap is caused
by 'resize_inode' (inode number 7), which reserves blocks at the beginning
of block groups to allow the group descriptor tables to grow. So check
whether you have the resize_inode feature enabled (the 'stats' command in
debugfs).
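For reference, with the df/du figures quoted above (df counts 1K blocks), the
unaccounted gap works out as follows; compare it against the block count that
'debugfs -R "stat <7>"' reports for the resize inode on your filesystem:

```python
# Gap between df "Used" and `du -s`, using the 1K-block figures quoted above.
df_used_kib = 26517428   # df "Used" column for /dev/sdc
du_used_kib = 13347132   # du -s /mnt

gap_kib = df_used_kib - du_used_kib
print(f"unaccounted gap: {gap_kib} KiB ({gap_kib / 2**20:.2f} GiB)")

# If resize_inode is enabled, `debugfs -R 'stat <7>' /dev/sdc` shows the
# blocks reserved for group descriptor growth; on my test filesystem this
# reservation is what accounts for the gap.
```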

								Honza
-- 
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR


