Re: [PATCH v1 3/3] overlay: Add rudimentary checking of writeback errseq on volatile remount

On Wed, Nov 25, 2020 at 5:36 PM Vivek Goyal <vgoyal@xxxxxxxxxx> wrote:
>
> On Wed, Nov 25, 2020 at 04:03:06PM +0200, Amir Goldstein wrote:
> > On Wed, Nov 25, 2020 at 12:46 PM Sargun Dhillon <sargun@xxxxxxxxx> wrote:
> > >
> > > Volatile remounts validate the following at the moment:
> > >  * Has the module been reloaded / the system rebooted
> > >  * Has the workdir been remounted
> > >
> > > This adds a new check for errors detected via the superblock's
> > > errseq_t. At mount time, the errseq_t is snapshotted to disk,
> > > and upon remount it's re-verified. This allows for kernel-level
> > > detection of errors without forcing userspace to perform a
> > > sync and allows for the hidden detection of writeback errors.
> > >
> >
> > Looks fine as long as you verify that the reuse is also volatile.
> >
> > Care to also add the alleged issues that Vivek pointed out with the existing
> > volatile mount to the documentation? (unless Vivek intends to fix those)
>
> I thought the current writeback error issue with volatile mounts needs to
> be fixed by shutting down the filesystem (and mere documentation is not
> enough).
>

Documentation is the bare minimum.
If someone implements the shutdown approach that would be best.

> Amir, are you planning to improve your ovl-shutdown patches to detect
> writeback errors for volatile mounts, or do you want somebody else to
> look at it?

I did not intend to work on this.
Whoever wants to pick this up doesn't need to actually implement the
shutdown ioctl; they may implement only an "internal shutdown" on error.

>
> W.r.t. this patch set, I still think we should first have patches to
> shut down the filesystem on writeback errors (for volatile mounts); then
> detecting writeback errors on remount makes more sense.
>

I agree that would be very nice, but I can also understand the argument
that volatile mount has an issue, which does not get any better or any
worse as a result of Sargun's patches.

If anything, they improve the situation:
Currently, the user has no way to know whether any data was lost on a
volatile mount.
After a successful mount cycle, the user knows that no data was lost
during the last volatile mount period.
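
For reference, a rough sketch of how the errseq sample/check described in
Sargun's commit message could look. The struct and helper names below are
illustrative placeholders, not the actual patch; only errseq_sample(),
errseq_check() and sb->s_wb_err are real kernel interfaces:

/*
 * Rough sketch only: ovl_volatile_info and these helpers are made-up
 * placeholders; the real patch's naming and on-disk format may differ.
 */
#include <linux/errseq.h>
#include <linux/fs.h>

struct ovl_volatile_info {
	errseq_t errseq;	/* upper sb's s_wb_err sampled at mount time */
};

/* At volatile mount: remember the upper sb's writeback error cursor. */
static void ovl_volatile_sample(struct super_block *upper_sb,
				struct ovl_volatile_info *info)
{
	info->errseq = errseq_sample(&upper_sb->s_wb_err);
}

/*
 * At volatile remount: returns the error recorded on the upper sb since
 * the sample (a negative errno), or 0 if no new writeback error occurred.
 */
static int ovl_volatile_verify(struct super_block *upper_sb,
			       struct ovl_volatile_info *info)
{
	return errseq_check(&upper_sb->s_wb_err, info->errseq);
}

The sampled value would then be persisted in the workdir at mount time and
read back for the check on remount, per the commit message's "snapshotted
to disk".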

Thanks,
Amir.


