Re: df returns incorrect size of partition due to huge overhead block count in ext4 partition

Hi Ted,

Thanks for the response. Really appreciate it. Some questions:

a) This issue was observed on one of our customer boards, so a fix is
a must for us, or at least I need a work-around so that other customer
boards do not hit this issue. As I mentioned, my script relies on the
used-percentage column of the df -h output. On the affected board,
which reports 16Z for both size and used space, the available space is
somehow reported correctly. Should my script rely on available space
instead of the used-percentage output of df? Would that be a reliable
work-around? Also, do you see any issue in continuing to use the
partition, or could the bogus overhead block count cause problems down
the line, such as the partition misbehaving or some sort of data loss?
Data loss would be a concern for us. Please advise.

//* More info on my script: it monitors the used percentage of the
partition via df -h, and when usage exceeds 70% it deletes files until
the percentage comes back down. Since df reports 100% usage all the
time, all my files end up getting deleted. *//
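For what it's worth, the work-around I have in mind for (a) would look
roughly like this (a sketch only; used_pct is a made-up helper name):
compute the percentage from the total and available block counts that
df -P prints, rather than trusting its Use% column, which the bogus
overhead value corrupts:

```shell
#!/bin/sh
# Sketch of the work-around: derive the used percentage from the
# total and available 1K-block counts in the df -P output, instead
# of trusting the Use% column.
used_pct() {
    # $1 = total 1K-blocks, $2 = available 1K-blocks
    total=$1
    avail=$2
    if [ "$total" -gt 0 ] && [ "$avail" -le "$total" ]; then
        echo $(( (total - avail) * 100 / total ))
    else
        echo -1   # numbers are inconsistent; refuse to act on them
    fi
}
```

It would be fed from df like this (/mnt/data being a placeholder
mount point): set -- $(df -P /mnt/data | awk 'NR==2 {print $2, $4}');
used_pct "$1" "$2". The -1 case is deliberate: if the numbers are
inconsistent I would rather skip a cleanup cycle than delete files.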

b) Do you have any other work-around suggestions so that I can use
the partition reliably even while the overhead block count is larger
than the actual number of blocks on the partition, or would you
recommend waiting for the fix in e2fsprogs?

I think that, apart from the fix in the e2fsprogs tools, a kernel fix
is also required: the kernel should check that the cached overhead
block count is not greater than the actual number of blocks on the
partition, and recompute it if it is.
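Until such a kernel check exists, the same sanity test could be done
from userspace before trusting df on a filesystem. A sketch (again,
/dev/sdXX is a placeholder device and overhead_is_sane a made-up
helper; newer dumpe2fs prints "Overhead clusters" rather than
"Overhead blocks"):

```shell
#!/bin/sh
# Sketch: compare the cached overhead against the block count from
# dumpe2fs -h (run against the unmounted device). An overhead value
# at or above the partition's block count can only be corruption.
overhead_is_sane() {
    # $1 = overhead blocks, $2 = total block count
    [ "$1" -lt "$2" ]
}

blocks=$(dumpe2fs -h /dev/sdXX 2>/dev/null \
    | awk -F': *' '/^Block count:/ {print $2}')
overhead=$(dumpe2fs -h /dev/sdXX 2>/dev/null \
    | awk -F': *' '/^Overhead (blocks|clusters):/ {print $2}')

if [ -n "$blocks" ] && [ -n "$overhead" ] \
    && ! overhead_is_sane "$overhead" "$blocks"; then
    echo "overhead $overhead exceeds block count $blocks: corrupt"
fi
```

With the values from my dumpe2fs output (overhead 50343939 against a
block count of 102400) the check fails, which is exactly the
corruption in question.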

Regards

On Sat, Mar 26, 2022 at 3:41 AM Theodore Ts'o <tytso@xxxxxxx> wrote:
>
> On Fri, Mar 25, 2022 at 12:12:30PM +0530, Fariya F wrote:
> > The output dumpe2fs returns the following
> >
> >     Block count:              102400
> >     Reserved block count:     5120
> >     Overhead blocks:          50343939
>
> Yeah, that value is obviously wrong; I'm not sure how it got
> corrupted, but that's the cause of your problem.
>
> > a) Where does overhead blocks get set?
>
> The kernel can calculate the overhead value, but it can be slow for
> very large file systems.  For that reason, it is cached in the
> superblock.  So if the s_overhead_clusters is zero, the kernel will
> calculate the overhead value, and then update the superblock.
>
> In newer versions of e2fsprogs, mkfs.ext4 / mke2fs will write the
> overhead value into the superblock.
>
> > b) Why is this value huge for my partition and how to correct it
> > considering fsck is also not correcting this
>
> The simplest way is to run the following command with the file system
> unmounted:
>
> debugfs -w -R "set_super_value overhead_clusters 0" /dev/sdXX
>
> Then the next time you mount the file system, the correct value should
> get calculated and filled in.
>
> It's a bug that fsck isn't noticing the problem and correcting it.
> I'll work on getting that fixed in a future version of e2fsprogs.
>
> My apologies for the inconvenience.
>
> Cheers,
>
>                                         - Ted
