Re: [RFC PATCH 0/9] fuse: API for Checkpoint/Restore

On Fri, Mar 3, 2023 at 8:43 PM Bernd Schubert <bschubert@xxxxxxx> wrote:
>
>
>
> On 2/20/23 20:37, Alexander Mikhalitsyn wrote:
> > Hello everyone,
> >
> > It would be great to hear your comments regarding this proof-of-concept Checkpoint/Restore API for FUSE.
> >
> > Support of FUSE C/R is a challenging task for CRIU [1]. Last year I gave a brief talk at LPC 2022
> > about how we handle C/R of files in CRIU and which blockers we have for FUSE filesystems. [2]
> >
> > The main problem for CRIU is that we have to restore mount namespaces and memory mappings before the process tree.
> > This means that when CRIU mounts a fuse filesystem it can't use the original FUSE daemon from the
> > restorable process tree and has to use a "fake daemon" instead.
> >
> > This leads to many other technical problems:
> > * "fake" daemon has to reply to FUSE_INIT request from the kernel and initialize fuse connection somehow.
> > This setup can be not consistent with the original daemon (protocol version, daemon capabilities/settings
> > like no_open, no_flush, readahead, and so on).
> > * each fuse request has a unique ID. It could confuse userspace if this unique ID sequence was reset.
> >
> > We could work around some of these issues and implement fragile, limited support for FUSE in CRIU, but that doesn't make much sense, IMHO.
> > Btw, I've only enumerated the CRIU restore-stage problems here. The dump stage is another story...
> >
> > My proposal is not only about CRIU. The same interface can be useful for recovering FUSE mounts after daemon crashes.
> > The LXC project uses LXCFS [3] as a procfs/cgroupfs/sysfs emulation layer for containers. We use a scheme where
> > one LXCFS daemon handles all the work for all the containers, and we use bind mounts to overmount particular
> > files/directories in procfs/cgroupfs/sysfs. If this single daemon crashes for some reason we are in trouble,
> > because we have to restart all the containers (the fuse bind mounts become invalid after the crash).
> > The solution is fairly simple:
> > somehow allow the existing fuse connection to be reinitialized and the daemon to be replaced on the fly.
> > This case is a little simpler than CRIU because we don't need to care about previously opened files
> > and other state; we are only interested in the mounts.
>

Hello, Bernd!

Thanks a lot for your attention to and review of this patch series!

>
> I like your patches, small and easy to read :)

Glad to hear, thanks! ;-)

> So this basically fails all existing open files - our (future) needs go
> beyond that. I wonder if we can extend it later and re-init the new
> daemon with something like "fuse_queue_recall" - basically the opposite
> of fuse_queue_forget. Not sure if fuse can access the vfs dentry cache
> to know for which files that would need to be done - if not, it would
> need to do its own book-keeping.
>

I thought about this (the problem with existing opened FDs) too. This is just
a first approach to the problem, and as I mentioned, I have no
CRIU-side implementation yet (so it's not tested with full
Checkpoint/Restore), but it works well for "fuse reinitialization"
(which is what LXCFS needs and can be useful for other fuse filesystems).
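
Just to make the "consistency" problem from the cover letter a bit more
concrete: below is a rough sketch (not part of this series, and the helper
name is made up) of the kind of negotiated FUSE_INIT parameters a
replacement daemon would have to agree on with the original one. It only
checks a subset of the real struct fuse_init_out from <linux/fuse.h>:

  #include <stdbool.h>
  #include <linux/fuse.h>

  /*
   * Sketch only, not part of this series: compare a saved copy of the
   * FUSE_INIT reply negotiated by the original daemon with the one a
   * replacement daemon would negotiate. Only a subset of the
   * struct fuse_init_out fields is checked.
   */
  static bool init_params_compatible(const struct fuse_init_out *saved,
                                     const struct fuse_init_out *fresh)
  {
          return saved->major == fresh->major &&
                 saved->minor == fresh->minor &&
                 saved->flags == fresh->flags &&
                 saved->max_readahead == fresh->max_readahead &&
                 saved->max_write == fresh->max_write;
  }

The real set of state to keep consistent is of course larger (e.g. the
no_open/no_flush style settings and the request ID sequence mentioned in
the cover letter), but this is the shape of the problem.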

I think we can easily extend this later once we reach some
agreement on a generic UAPI.

I hope that more people will react to this and post their opinions.

Kind regards,
Alex

>
> Thanks,
> Bernd



