Re: [PATCH v7] overlayfs: Provide a mount option "volatile" to skip sync

On Sat, Nov 07, 2020 at 11:52:27AM +0000, Sargun Dhillon wrote:
> On Sat, Nov 07, 2020 at 11:35:04AM +0200, Amir Goldstein wrote:
> > On Fri, Nov 6, 2020 at 9:43 PM Giuseppe Scrivano <gscrivan@xxxxxxxxxx> wrote:
> > >
> > > Vivek Goyal <vgoyal@xxxxxxxxxx> writes:
> > >
> > > > On Fri, Nov 06, 2020 at 09:58:39AM -0800, Sargun Dhillon wrote:
> > > >
> > > > [..]
> > > >> There is some slightly confusing behaviour here [I realize this
> > > >> behaviour is as intended]:
> > > >>
> > > >> (root) ~ # mount -t overlay -o \
> > > >>     volatile,index=off,lowerdir=/root/lowerdir,upperdir=/root/upperdir,workdir=/root/workdir \
> > > >>     none /mnt/foo
> > > >> (root) ~ # umount /mnt/foo
> > > >> (root) ~ # mount -t overlay -o \
> > > >>     volatile,index=off,lowerdir=/root/lowerdir,upperdir=/root/upperdir,workdir=/root/workdir \
> > > >>     none /mnt/foo
> > > >> mount: /mnt/foo: wrong fs type, bad option, bad superblock on none,
> > > >> missing codepage or helper program, or other error.
> > > >>
> > > >> From my understanding, the dirty flag should only be a problem if the
> > > >> existing overlayfs was unmounted uncleanly. Docker does this (mounts,
> > > >> then re-mounts) during startup because it writes some files to the
> > > >> overlayfs. I think we should harden the volatile check slightly and
> > > >> make it so that remounting within the same boot is not a problem;
> > > >> having to have the user clear the workdir every time is a pain. In
> > > >> addition, the semantics of the volatile patch itself do not appear to
> > > >> be such that they would break mounts during the same boot / mount of
> > > >> the upperdir -- overlayfs does not defer any writes itself, it only
> > > >> short-circuits syncs of the upperdir.
> > > >
> > > > umount normally does a sync, and with "volatile" overlayfs skips that
> > > > sync. So a successful umount does not mean that the files got synced
> > > > to the backing store. It is possible that the system crashes after
> > > > umount; after reboot the user mounts an upper which is now corrupted,
> > > > and overlay will not detect it.
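> > > >
> > > > Roughly, the patch short-circuits ovl_sync_fs() for volatile mounts,
> > > > something like this (a paraphrased sketch, not the exact diff;
> > > > locking elided):
> > > >
> > > > 	static int ovl_sync_fs(struct super_block *sb, int wait)
> > > > 	{
> > > > 		struct ovl_fs *ofs = sb->s_fs_info;
> > > >
> > > > 		/* nothing to sync without an upper layer */
> > > > 		if (!ovl_upper_mnt(ofs))
> > > > 			return 0;
> > > >
> > > > 		/* volatile: skip syncing the upper fs, even on umount */
> > > > 		if (ofs->config.ovl_volatile)
> > > > 			return 0;
> > > >
> > > > 		return sync_filesystem(ovl_upper_mnt(ofs)->mnt_sb);
> > > > 	}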
> > > >
> We explicitly disable this in our infrastructure via a small kernel patch
> that stubs out the sync behaviour. IIRC, that behaviour was added some time
> after 4.15, and when we picked up the related overlayfs patch it caused a
> lot of machines to crash.
> 
> This was due to high container churn -- and other containers having a lot
> of outstanding dirty pages at exit time. When we would tear down their
> mounts, syncfs would get called [on the entire underlying device / fs], and
> that would stall all of the containers on the machine. We really do not
> want this behaviour.
> 
> > > > You seem to be asking for an alternate option where we disable
> > > > fsync() but not syncfs(). In that case the sync on umount will still
> > > > be done, which means a successful umount would mean the upper is
> > > > fine, and we could automatically remove the incompat dir upon
> > > > umount.
> > >
> > > Could this be handled in user space? It should still be possible to do
> > > the equivalent of:
> > >
> > > # sync -f /root/upperdir
> > > # rm -rf /root/workdir/incompat/volatile
> > >
> > 
> > FWIW, the sync -f command above is
> > 1. Not needed when re-mounting overlayfs as volatile
> > 2. Not enough when re-mounting overlayfs as non-volatile
> > 
> > In the latter case, a full sync (no -f) is required.
> > 
> > Handling this in userspace is the preferred option IMO,
> > but if there is an *appealing* reason to allow opportunistic
> > volatile overlayfs re-mount as long as the upperdir inode
> > is in cache (userspace can make sure of that), then
> > all I am saying is that it is possible and not terribly hard.
> > 
> > Thanks,
> > Amir.
> 
> 
> I think I have two approaches in mind that are viable. Both rely on adding
> a small amount of data (either via an xattr, or data in the file itself)
> that allows us to ascertain whether or not the upperdir is okay to reuse,
> even when it was mounted volatile:
> 
> 1. We introduce a GUID into the superblock structure itself. I think this
> would actually be valuable independently of overlayfs, in order to answer
> questions like "my database restarted; should it do an integrity check, or
> is the same SB still mounted?" I started down the route of cooking up an
> ioctl for this, but I think that is killing a mosquito with a cannon.
> Perhaps this is the right long-term approach, but I don't think it'll be
> easy to get through.
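> 
> Purely illustrative -- no such ioctl exists today, and the names here are
> invented -- what I was sketching was along these lines:
> 
> 	/* regenerated every time the superblock is instantiated */
> 	struct fs_instance_id {
> 		__u8	uuid[16];
> 	};
> 
> 	#define FS_IOC_GET_INSTANCE_ID	_IOR('f', 0x7f, struct fs_instance_id)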
> 
> 2. I've started cooking up this patch a little bit more, where we override
> kill_sb. Specifically, we assign our own kill_sb on the upperdir / workdir
> superblock, and keep track of superblocks and the errseq on each super
> block. We keep a list of tracked superblocks in memory, along with the
> last observed errseq and a GUID. Upon mount of the overlayfs, we write a
> key that uniquely identifies the sb + errseq. Upon remount, we check
> whether the errseq or the sb has changed. If so, we throw an error;
> otherwise, we allow the mount to proceed.
> 
> This approach has seen some usage in net[1].

So what happens if the system crashes and you boot back up? Do you throw
away all the containers?

The mechanism you described above sounds like you want to detect writeback
errors during the next mount and fail that mount (and possibly throw away
the container)?

Say I start 5 containers, mount overlay with volatile, and these
containers exit. Later, say 4 new containers are started and some error
happens in writeback. Now if I restart any of the first 5 containers,
they will all see the error, right? And they will all fail to start. Is
that what you are trying to achieve, or have I missed the point
completely?
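
To spell out my reading of the proposal in errseq terms (a rough sketch
with invented helper names, not code from your patch):

	#include <linux/errseq.h>
	#include <linux/fs.h>

	/* at volatile mount: remember where the upper sb's error cursor is */
	static errseq_t ovl_volatile_sample(struct super_block *upper_sb)
	{
		return errseq_sample(&upper_sb->s_wb_err);
	}

	/*
	 * At re-mount: s_wb_err is per-superblock, so a writeback error hit
	 * by *any* container on this fs after the sample makes this return
	 * nonzero, and every overlay that sampled earlier gets refused, not
	 * just the one whose pages were actually lost.
	 */
	static int ovl_volatile_reuse_check(struct super_block *upper_sb,
					    errseq_t since)
	{
		return errseq_check(&upper_sb->s_wb_err, since);
	}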

Thanks
Vivek



