Short version: What happens with FS-Cache/cachefilesd as the cache
grows to the size of the NFS filesystem it's caching?
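To make "grows" concrete: as I understand it, cachefilesd culls the
cache based on the brun/bcull/bstop (and frun/fcull/fstop) thresholds
in /etc/cachefilesd.conf, so the interesting case is a cache partition
big enough that it rarely hits them. The little sketch below is only
an illustration of that assumption, with what I believe are the stock
10%/7%/3% block thresholds hard-coded; it just reports where a cache
partition sits relative to them.

    #!/usr/bin/env python3
    """Report where a cachefiles cache partition sits relative to the
    brun/bcull/bstop thresholds (stock cachefilesd.conf defaults assumed)."""
    import os
    import sys

    CACHE_DIR = "/var/cache/fscache"      # assumed default cache directory
    BRUN, BCULL, BSTOP = 10.0, 7.0, 3.0   # assumed stock defaults, % of blocks free

    def report(path):
        st = os.statvfs(path)
        pct_free = 100.0 * st.f_bavail / st.f_blocks
        print(f"{path}: {pct_free:.1f}% of blocks available")
        if pct_free < BSTOP:
            print("  below bstop: new cache objects would not be created")
        elif pct_free < BCULL:
            print("  below bcull: cachefilesd would be culling old objects")
        elif pct_free < BRUN:
            print("  between bcull and brun: culling winding down")
        else:
            print("  at or above brun: no culling")

    if __name__ == "__main__":
        report(sys.argv[1] if len(sys.argv) > 1 else CACHE_DIR)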
Where I work we've used a network filesystem to achieve a Single
System Image for years (actually decades) now. It's been nice, but
in the past
few years it seems that network speeds haven't kept up. Add to this the
fact that disk space is pretty darned cheap, and it makes me wonder
about a slightly different operating regime for a network filesystem.
We're not using NFS/FS-Cache at work, but I do on my home cluster,
which is what prompted this question. I suspect the question is
generally applicable as well.
I'd like to have enough disk space on my user-facing system(s) to
hold all of their data. In that situation the network filesystem
becomes a real-time backup, a way to propagate data to different
systems and keep them in sync, and, since I mentioned backup, a
central point from which to run some sort of "time-machine"-like
backup of old files.
Are you aware of limitations in the existing FS-Cache/cachefilesd
code in this kind of very-large-cache situation? Do writes to a
cached filesystem go to the cache as well as to the NFS server? Does
a write complete once the NFS client has the data, or only after the
NFS server acknowledges that it has it? Does the time spent
validating cached data start to dwarf any caching benefit as the
cache size grows?
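For the write question, the crude experiment I had in mind is
something like the sketch below: time a write() against a file on an
fsc-mounted export, then time the fsync(), on the assumption that the
gap between the two shows whether the call returns once the client
has the data or only once the server has acknowledged it. The path
is just a placeholder for my setup.

    #!/usr/bin/env python3
    """Crude timing of write() vs. fsync() on an NFS file -- the path
    below is a placeholder for a file on an NFS mount done with -o fsc."""
    import os
    import time

    PATH = "/mnt/nfs/cache-test.bin"      # placeholder: file on fsc-mounted export
    DATA = os.urandom(16 * 1024 * 1024)   # 16 MiB of throwaway data

    fd = os.open(PATH, os.O_CREAT | os.O_WRONLY | os.O_TRUNC, 0o644)
    try:
        t0 = time.monotonic()
        os.write(fd, DATA)    # does this return once the client has the data...
        t1 = time.monotonic()
        os.fsync(fd)          # ...or only here, once the server has committed it?
        t2 = time.monotonic()
    finally:
        os.close(fd)
        os.unlink(PATH)

    print(f"write(): {t1 - t0:.3f}s   fsync(): {t2 - t1:.3f}s")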
As an aside, what's the status of the in-kernel AFS client? Last I
knew the advice was don't use it, just use OpenAFS, but I thought I'd
heard about GSoC work being done on the in-kernel module.
Thanks,
Dale Pontius