Re: [PATCH] e2fsck: zero-fill shared blocks by default

On Wed, Apr 07, 2021 at 09:23:23PM -0400, Artem Blagodarenko wrote:
> e2fsck has some extended options that provide different ways of
> handling duplicate blocks:
> 
> clone=dup|zero
> shared=preserve|lost+found|delete
> 

This patch isn't applicable to the upstream e2fsprogs, because we
don't support these extended options.

I'd be open to taking commits to support these options, but I'm not
likely to change the default to be "clone=zero".
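
For reference, the way I'd expect those knobs to be spelled on the
command line of a patched e2fsprogs is via -E extended options,
something along these lines (the clone=/shared= names are taken from
your patch description; the exact syntax is an assumption on my part,
and none of it exists upstream):

    # hypothetical invocation against a patched e2fsck
    e2fsck -f -E clone=dup,shared=lost+found /dev/sdXN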

> When e2fsck detects multiply-claimed blocks, the default repair
> behavior is to clone the duplicate blocks. This is guaranteed to
> result in data corruption, and is also a security hole.

The data corruption happened when the file system got corrupted in
the first place; cloning is just e2fsck's attempt to recover from it.
Changing the default to zero the cloned blocks is guaranteed to make
things worse.

> Typically,
> one of the inodes with multiply-claimed blocks is valid, the others
> have corrupt extent data referencing some of the same disk blocks
> as the valid inode.

True; but when we clone the shared block, one of the files will
hopefully be made whole.  Zeroing means that *both* files are
guaranteed to be corrupted.

Can this potentially be a security problem?  Well, it's up to the
system administrator to take a look at the files that were fixed up
during pass1b handling, and decide which file is the valid one.  If
the system administrator wants to run e2fsck -fy, and then blindly
bring up the system for sharing... that's on the system administrator.
If they don't care to manually inspect the files with shared blocks
first, then sure, perhaps they should edit /etc/e2fsck.conf and change
the clone or shared behaviour.
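
If the patched e2fsprogs grows e2fsck.conf support for this, I'd
expect it to hang off the existing [options] stanza, something like
the sketch below (the clone/shared key names are an assumption on my
part; upstream e2fsck.conf(5) has no such keys today):

    [options]
        clone = dup
        shared = lost+found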

It might be that "shared=lost+found" is a better choice for them,
since /lost+found is mode 700 (owned by root).  "clone=zero" leaves
both files, now guaranteed to be corrupted, in place for the user to
trip over.  Moving them to lost+found means the user still has lost
access to both files, but at least they are preserved in /lost+found,
where the system administrator (or the site security officer, if you
are running a system with Mandatory Access Controls) can look them
over and then restore them to the user if that is appropriate/safe.
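
To make that manual review step concrete, something along these lines
is what I have in mind (paths and inode numbers are illustrative;
reconnected inodes normally show up in /lost+found named #<inode
number>, though the shared=lost+found handling may name them
differently):

    # mount the repaired file system and review as root
    mount /dev/sdXN /mnt
    ls -l /mnt/lost+found
    file /mnt/lost+found/#12345   # see what the recovered data looks like
    # if a copy checks out, hand it back to its owner
    cp -a /mnt/lost+found/#12345 /home/user/recovered-file
    chown user:user /home/user/recovered-file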

I'll note, though, that if you have some directory corruption that
causes some file such as /etc/hosts.deny, /etc/iptable/rules or
/etc/ufw/ufw.conf to end up in /lost+found, blindly bringing up the
system after running the e2fsck -fy hammer isn't necessarily going to
be safe, either.  The whole *point* of e2fsck -p is that it is
automatically safe; if it fails to make some kind of fix, it's because
an intelligent human is supposed to drive, and there may be a need to
make manual adjustments to the file system, or perhaps to tell e2fsck
*not* to make an obvious fix, in the interests of recovering as much
data as possible.
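
That's also why boot scripts treat preen mode's exit status the way
they do.  A minimal sketch (the exit-code meanings are from
e2fsck(8); the script itself is illustrative):

    e2fsck -p /dev/sdXN
    status=$?
    # 0 = clean, 1 = errors corrected, 2 = corrected but reboot needed;
    # 4 and above means e2fsck could not fix everything automatically.
    if [ "$status" -ge 4 ]; then
        echo "fsck of /dev/sdXN needs manual attention" >&2
        exit 1
    fi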

The real problem is people who think running "e2fsck -fy" is
automatically safe and all will be better...

Cheers,

					- Ted


