Re: Phantom full ext4 root filesystems on 4.1 through 4.14 kernels

Can you try using "df -i" when the file system looks full, and then
reboot, and look at the results of "df -i" afterwards?
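To make the before/after comparison concrete, something like this would do (a sketch; I'm using the mount point / since /dev/sda3 is your root filesystem):

```shell
# Sketch: record inode usage while the fs looks full, then again
# after the reboot, and diff the two.
df -i / > /tmp/df-i.before
# ... reboot here ...
df -i / > /tmp/df-i.after
diff /tmp/df-i.before /tmp/df-i.after
```

If the IUsed count drops sharply across the reboot, that points at inodes being pinned in memory rather than real on-disk usage.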

Also interesting would be to grab a metadata-only snapshot of the file
system when it is in its mysteriously full state, writing that
snapshot on some other file system *other* than on /dev/sda3:

     e2image -r /dev/sda3 /mnt/sda3.e2i

Then run e2fsck on it:

     e2fsck -fy /mnt/sda3.e2i

What I'm curious about is how many "orphaned inodes" are reported, and
how much space they are taking up.  That will look like this:

% gunzip < /usr/src/e2fsprogs/tests/f_orphan/image.gz  > /tmp/foo.img
% e2fsck -fy /tmp/foo.img
e2fsck 1.45.2 (27-May-2019)
Clearing orphaned inode 15 (uid=0, gid=0, mode=040755, size=1024)
Clearing orphaned inode 17 (uid=0, gid=0, mode=0100644, size=0)
Clearing orphaned inode 16 (uid=0, gid=0, mode=040755, size=1024)
Clearing orphaned inode 14 (uid=0, gid=0, mode=0100644, size=69)
Clearing orphaned inode 13 (uid=0, gid=0, mode=040755, size=1024)
...
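If you want a rough byte total for the space those orphans account for, you could sum the size= fields out of the e2fsck output with awk (a sketch, using the /mnt/sda3.e2i snapshot path from above):

```shell
# Sketch: sum the size= fields of the "Clearing orphaned inode" lines
# to estimate how many bytes the orphaned inodes account for.
e2fsck -fy /mnt/sda3.e2i 2>&1 |
  awk -F'size=' '/Clearing orphaned inode/ {
      sub(/\).*/, "", $2); total += $2
  }
  END { printf "orphaned bytes: %d\n", total }'
```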

It's been theorized the bug is in overlayfs, where it's holding inodes
open so the space isn't released.  IIRC someone had reported a
similar problem with overlayfs on top of xfs.  (BTW, are you using
overlayfs or aufs with your Docker setup?)
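Either way, you can check which storage driver Docker is actually using (a sketch; `docker info` reports it on its "Storage Driver" line):

```shell
# Sketch: ask Docker which storage driver is in use
# (overlay, overlay2, aufs, devicemapper, ...).
docker info 2>/dev/null | grep -i 'storage driver' \
  || echo 'docker not available'
```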

		     	       	      - Ted


