On Mon, Dec 9, 2019 at 7:33 PM Amir Goldstein <amir73il@xxxxxxxxx> wrote:
>
> On Mon, Dec 9, 2019 at 4:47 PM David Howells <dhowells@xxxxxxxxxx> wrote:
> >
> > I've been rewriting fscache and cachefiles to massively simplify it and make
> > use of the kiocb interface to do direct-I/O to/from the netfs's pages, which
> > didn't exist when I first did this.
> >
> >     https://lore.kernel.org/lkml/24942.1573667720@xxxxxxxxxxxxxxxxxxxxxx/
> >     https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/log/?h=fscache-iter
> >
> > I'm getting towards the point where it's working and able to do basic caching
> > once again. So now I've been thinking about what it'd take to support
> > disconnected operation. Here's a list of things that I think need to be
> > considered or dealt with:
> >
> >  (1) Making sure the working set is present in the cache.
> >
> >      - Userspace (find/cat/tar)
> >      - Splice netfs -> cache
> >      - Metadata storage (e.g. directories)
> >      - Permissions caching
> >
> >  (2) Making sure the working set doesn't get culled.
> >
> >      - Pinning API (cachectl() syscall?)
> >      - Allow culling to be disabled entirely on a cache
> >      - Per-fs/per-dir config
> >
> >  (3) Switching into/out of disconnected mode.
> >
> >      - Manual, automatic
> >      - On what granularity?
> >        - Entirety of fs (e.g. all nfs)
> >        - By logical unit (server, volume, cell, share)
> >
> >  (4) Local changes in disconnected mode.
> >
> >      - Journal
> >      - File identifier allocation
> >      - statx flag to indicate provisional nature of info
> >      - New error codes
> >        - EDISCONNECTED - Op not available in disconnected mode
> >        - EDISCONDATA - Data not available in disconnected mode
> >        - EDISCONPERM - Permission cannot be checked in disconnected mode
> >        - EDISCONFULL - Disconnected mode cache full
> >      - SIGIO support?
> >
> >  (5) Reconnection.
> >
> >      - Proactive or JIT synchronisation
> >      - Authentication
> >      - Conflict detection and resolution
> >        - ECONFLICTED - Disconnected mode resolution failed
> >      - Journal replay
> >      - Directory 'diffing' to find remote deletions
> >      - Symlink and other non-regular file comparison
> >
> >  (6) Conflict resolution.
> >
> >      - Automatic where possible
> >        - Just create/remove new non-regular files if possible
> >      - How to handle permission differences?
> >      - How to let userspace access conflicts?
> >        - Move local copy to 'lost+found'-like directory
> >          - Might not have been completely downloaded
> >        - New open() flags?
> >          - O_SERVER_VARIANT, O_CLIENT_VARIANT, O_RESOLVED_VARIANT
> >        - fcntl() to switch variants?
> >
> >  (7) GUI integration.
> >
> >      - Entering/exiting disconnected mode notification/switches.
> >      - Resolution required notification.
> >      - Cache getting full notification.
> >
> > Can anyone think of any more considerations?  What do you think of the
> > proposed error codes and open flags?  Is that the best way to do this?
> >
>
> Hi David,
>
> I am very interested in this topic.
> I can share (some) information from experience with a "Caching Gateway"
> implementation in userspace shipped in products of my employer, CTERA.
>
> I have come across several attempts to implement a network fs cache
> using overlayfs. I don't remember by whom, but they were asking
> questions on the overlayfs list about online modification to the lower layer.
>
> It is not so far fetched, as you get many of the requirements for metadata
> caching out-of-the-box, especially with the recent addition of the metacopy
> feature. Also, if you consider the plans to implement an overlayfs page
> cache [1][2], then at least the read side of fscache sounds like it has
> some things in common with overlayfs.
>
> Anyway, you should know plenty about overlayfs to say if you think
> there is any room for collaboration between the two projects.
>
>
> [1] https://marc.info/?l=linux-unionfs&m=154995746503505&w=2
> [2] https://github.com/amir73il/linux/commits/ovl-aops-wip

David,

I have been reading through the fscache APIs and tried to answer this
(maybe stupid) question: why does every netfs need to implement fscache
support on its own?

fscache support as it is today is extremely intrusive to filesystem code,
and your rewrite doesn't make it any less intrusive.

My thinking is: can't we implement a stackable cachefs which interfaces
with fscache and whose API to the netfs is pure vfs APIs, just like
overlayfs interfaces with a lower fs?

The only fscache API I could find that really needs to be called from
netfs code is fscache_invalidate(), and many of those calls are invoked
from vfs ops anyway, so maybe they could also be hoisted into this
cachefs.

As long as the netfs supports direct_IO() (all except afs do), the
active page cache could be that of the stackable cachefs, and network IO
would always be direct from/to cachefs pages.

If the netfs supports export_operations (all except afs do), then
indexing the cache objects could be done in a generic manner using fsid
and file handle, just like the overlayfs index feature works today.

Would it not be a maintenance win if all (or most of) the fscache logic
was yanked out of all the specific netfs's?

Can you think of reasons why the stackable cachefs model cannot work, or
why it is inferior to the current model of fscache integration with each
netfs?

Thanks,
Amir.