Re: [Lsf-pc] [LSF/MM TOPIC] Network filesystem cache management system call

On Fri, 2017-01-20 at 17:45 +0000, David Howells wrote:
> Jeff Layton <jlayton@xxxxxxxxxx> wrote:
> 
> > I think it might be more useful to wire posix_fadvise into the
> > filesystem drivers somehow. A hinting interface really seems like the
> > right approach here, given the differences between different
> > filesystems.
> 
> The main reason I'm against using an fd-taking interface is that the object to
> be affected might not be a regular file and could even be mounted over.
> 

How would you disambiguate the mounted-over case with a path-based
interface?

> > >  (*) VIOCGETCACHEPARMS
> > > 
> > >      Get the size of the cache.
> > > 
> > 
> > Global or per-inode cache?
> 
> I think this would have to be whatever cache the target inode is lurking
> within.  fscache permits multiple caches on a system.
> 

OK, but does this tell you "how big is this entire cache?" or "how much
cache does this inode currently consume?" Both could be useful...

> > >  (*) VIOC_FLUSHVOLUME
> > > 
> > >      Flush all cached state for a volume, both from RAM and local disk cache
> > >      as far as possible.  Files that are open aren't necessarily affected.
> > > 
> > 
> > Maybe POSIX_FADV_DONTNEED on the mountpoint?
> 
> Ugh.  No.  How would you differentiate flushing just the mountpoint or the
> root dir from flushing the volume?  Also, you cannot open the mountpoint
> object if it is mounted over.
> 

Good point.

I don't know... this kind of thing might be better suited to a sysfs-
style interface, honestly. Anything operating at the level of an entire
fs doesn't benefit much from a per-inode syscall interface. That said,
a sysfs interface could get messy once you start dealing with
namespaces and such.
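
To illustrate the scale problem: anything fs-wide ends up being driven
per-mount from userspace anyway. A rough sketch, assuming some
per-mountpoint flush primitive existed (the flush step below is
entirely made up):

```shell
# Sketch only: walk /proc/mounts and apply a hypothetical per-volume
# flush to every AFS mount.  The echo stands in for whatever primitive
# (ioctl, syscall, sysfs write) ends up existing -- none does today.
awk '$3 == "afs" { print $2 }' /proc/mounts |
while read -r mnt; do
    echo "flush: $mnt"    # placeholder for the real flush operation
done
```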

> Also POSIX_FADV_DONTNEED is a hint that an application no longer needs the
> data and is not specifically a command to flush that data.
> 
> > >  (*) VIOC_FLUSHALL
> > > 
> > >      Flush all cached state for all volumes.
> > > 
> > 
> > How would you implement that in a generic way? Suppose I have a mix of
> > AFS and NFS mountpoints and issue this via some mechanism. Is
> > everything going to drop their caches?
> > 
> > Might want to punt on this one or do it with a private, AFS-only ioctl.
> 
> Might be worth making it AFS-only.  Possibly it would make sense to implement
> it in userspace using VIOC_FLUSHVOLUME and iterating over /proc/mounts, but
> that then raises the question of whether this should be affected by namespaces.
> 
> > POSIX_FADV_WILLNEED ?
> 
> Perhaps.
> 
> > Does AFS allow remote access to devices a'la CIFS?
> 
> No. :-)
> 

I'm not sure I get why it's terribly useful to manipulate the cache on
a symlink or device file itself. There's generally not much cached for
those anyway; rarely more than a page.

> > Could we allow posix_fadvise on O_PATH opens?  For symlinks there is always
> > O_NOFOLLOW.
> 
> Maybe.  Al?
> 
> This doesn't work for mounted-over mountpoints, however.  I guess we could add
> AT_NO_FOLLOW_MOUNTS to get the basalmost mountpoint.
> 

Yeah, perhaps.
-- 
Jeff Layton <jlayton@xxxxxxxxxx>
--
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


