Re: [Linux-cachefs] Re: NFS Patch for FSCache

> > for instance, what happens when the client's cache disk is much slower
> > than the server (high performance RAID with high speed networking)?
> 
> Then using a local cache won't help you, no matter how hard it tries, except
> in the following circumstances:
> 
>  (1) The server is not available.
> 
>  (2) The network is heavily used by more than just one machine.
> 
>  (3) The server is very busy.
> 
> > what happens when the client's cache disk fills up so the disk cache is
> > constantly turning over (which files are kicked out of your backing
> > cachefs to make room for new data)?
> 
> I want that to be based on an LRU approach, using last access times. Inodes
> pinned by being open can't be disposed of and neither can inodes pinned by
> being marked so; but anything else is fair game for culling.
> 
> The cache has to be scanned occasionally to build up a list of inodes that are
> candidates for being culled, and I think a certain amount of space must be
> kept available to satisfy allocation requests; therefore the culling needs
> thresholds.
> 
> Unfortunately, culling is going to be slower than allocation in general
> because we always know where we're going to allocate, but we have to search
> for something to get the chop.
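
To make sure I'm reading the proposed culling model right, here is a rough
userspace sketch of it: LRU by atime, pinned entries skipped, culling driven
by a free-space threshold. The directory name, the threshold, and the use of
the sticky bit to stand in for "pinned" are all invented for illustration;
this is obviously not the real cachefs code.

#include <stdio.h>
#include <stdlib.h>
#include <dirent.h>
#include <limits.h>
#include <time.h>
#include <unistd.h>
#include <sys/stat.h>
#include <sys/statvfs.h>

#define FREE_MIN_PCT 10		/* start culling below this much free space */

struct candidate {
	char path[PATH_MAX];
	time_t atime;
};

static int by_atime(const void *a, const void *b)
{
	const struct candidate *ca = a, *cb = b;
	return (ca->atime > cb->atime) - (ca->atime < cb->atime);
}

static int free_space_pct(const char *dir)
{
	struct statvfs vfs;

	if (statvfs(dir, &vfs) < 0 || vfs.f_blocks == 0)
		return -1;
	return (int)(vfs.f_bavail * 100 / vfs.f_blocks);
}

int main(int argc, char **argv)
{
	const char *cachedir = argc > 1 ? argv[1] : "/var/cache/example";
	struct candidate *cands = NULL;
	size_t n = 0, i;
	struct dirent *de;
	DIR *d;
	int pct;

	pct = free_space_pct(cachedir);
	if (pct < 0 || pct >= FREE_MIN_PCT)
		return 0;		/* enough space, nothing to do */

	/* Scan the cache and collect cull candidates. */
	d = opendir(cachedir);
	if (!d) {
		perror("opendir");
		return 1;
	}
	while ((de = readdir(d))) {
		struct candidate c, *tmp;
		struct stat st;

		if (de->d_name[0] == '.')
			continue;
		snprintf(c.path, sizeof(c.path), "%s/%s", cachedir, de->d_name);
		if (stat(c.path, &st) < 0 || !S_ISREG(st.st_mode))
			continue;
		if (st.st_mode & S_ISVTX)	/* pretend sticky bit == "pinned" */
			continue;
		c.atime = st.st_atime;
		tmp = realloc(cands, (n + 1) * sizeof(*cands));
		if (!tmp)
			break;
		cands = tmp;
		cands[n++] = c;
	}
	closedir(d);

	/* Oldest access time first -- that's the LRU end of the list. */
	qsort(cands, n, sizeof(*cands), by_atime);

	/* Cull until we're back above the threshold. */
	for (i = 0; i < n; i++) {
		pct = free_space_pct(cachedir);
		if (pct < 0 || pct >= FREE_MIN_PCT)
			break;
		if (unlink(cands[i].path) == 0)
			printf("culled %s\n", cands[i].path);
	}

	free(cands);
	return 0;
}

What the sketch makes concrete is your last point: allocation always knows
where it's going, but finding victims means a scan plus a sort, so culling
has to run ahead of demand behind the thresholds.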

I would like to suggest that cache culling be driven by a userspace
daemon, with LRU (by last access time) as the fallback if the userspace
app doesn't respond quickly enough. Or, at the least, provide a way to
load modules that supply different culling algorithms.
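
To be concrete about the fallback idea, here is a sketch of the policy
hook, again as plain userspace C rather than a real kernel interface --
no such FSCache API exists today, and every name here (cache_object,
cull_policy_fn, registered_policy, lru_fallback) is made up. The core
asks whatever policy is registered for victims; if no policy is
registered, or it can't decide in time, the built-in LRU pass takes over.

#include <stdio.h>
#include <stddef.h>
#include <time.h>

struct cache_object {
	const char *key;
	time_t atime;
	int pinned;
};

/* A culling policy fills 'victims' with indices into 'objs' and returns
   how many it chose, or -1 if it can't decide (e.g. daemon timed out). */
typedef int (*cull_policy_fn)(const struct cache_object *objs, size_t n,
			      size_t *victims, size_t want);

static cull_policy_fn registered_policy;	/* set by a daemon or module */

/* Built-in fallback: pick the least recently used unpinned objects. */
static int lru_fallback(const struct cache_object *objs, size_t n,
			size_t *victims, size_t want)
{
	size_t chosen = 0;

	while (chosen < want) {
		size_t best = n, i, j;

		for (i = 0; i < n; i++) {
			int already = 0;

			if (objs[i].pinned)
				continue;
			for (j = 0; j < chosen; j++)
				if (victims[j] == i)
					already = 1;
			if (already)
				continue;
			if (best == n || objs[i].atime < objs[best].atime)
				best = i;
		}
		if (best == n)
			break;			/* nothing left to cull */
		victims[chosen++] = best;
	}
	return (int)chosen;
}

static int cull(const struct cache_object *objs, size_t n,
		size_t *victims, size_t want)
{
	int got = -1;

	if (registered_policy)
		got = registered_policy(objs, n, victims, want);
	if (got < 0)				/* no policy, or it gave up */
		got = lru_fallback(objs, n, victims, want);
	return got;
}

int main(void)
{
	struct cache_object objs[] = {
		{ "a", 100, 0 }, { "b", 50, 0 }, { "c", 10, 1 }, { "d", 75, 0 },
	};
	size_t victims[2];
	int i, got;

	/* No policy registered here, so the LRU fallback does the work. */
	got = cull(objs, 4, victims, 2);
	for (i = 0; i < got; i++)
		printf("cull %s\n", objs[victims[i]].key);
	return 0;
}

A userspace daemon would just be one provider of registered_policy; the
fallback means the cache never blocks waiting for it.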

If the server is responding and delivering files faster than we can
write them to local disk and cull space, should we really be caching at
all? Is it even appropriate for the kernel to make that decision?

