On Tue, May 27, 2014 at 4:06 AM, Albe Laurenz <laurenz.albe@xxxxxxxxxx> wrote:
> I just learned that NFS does not use a file system cache on the client side.
My experience suggested that it did something a little weirder than that: it would cache read data as long as it was clean, but once the data was dirtied and written back, it would drop it from the cache. But it probably depends on a lot of variables and details I don't recall anymore.
> On the other hand, PostgreSQL relies on the file system cache for performance,
> because beyond a certain amount of shared_buffers performance will suffer.
Some people do run into problems, and those problems are not readily reproducible (at least not in a publicly disclosable way that I know of). Other people use large shared_buffers and have no problems at all, or none that are fixed by lowering shared_buffers.
We should not elevate a rumor to a law.
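To put numbers on the kind of configuration being argued about, here is an illustrative postgresql.conf fragment (the values are made up for the example, not a recommendation):

    # illustrative values only, not a recommendation
    shared_buffers = 8GB            # the setting the "too large hurts" rumor is about
    effective_cache_size = 24GB     # planner's estimate of shared_buffers plus OS cache,
                                    # which assumes the kernel really is caching file data

On an NFS client that does little or no caching, it is the effective_cache_size assumption that breaks down, whatever shared_buffers is set to.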
> Together these things seem to indicate that you cannot get good performance
> with a large database over NFS, since you cannot leverage memory speed.
> Now I wonder if there are any remedies (CacheFS?) and what experiences
> people have had with the performance of large databases over NFS.
I've only used it in cases where I didn't consider durability important, and even then I didn't find it worth pursuing because of the performance. But I was piggybacking on an existing resource rather than an impressive NFS server tuned specifically for this usage, so my experience probably doesn't mean much performance-wise.
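If someone does want to experiment with the CacheFS-style remedy, the Linux equivalent is FS-Cache: add the fsc mount option and run the cachefilesd daemon. A sketch, with a made-up server name and paths (I haven't benchmarked this myself):

    # /etc/fstab -- hypothetical export and mount point; 'fsc' enables FS-Cache
    nfsserver:/export/pgdata  /var/lib/pgsql/data  nfs  rw,hard,fsc  0 0

    # the cache daemon must also be running, e.g.
    #   service cachefilesd start

Whether that helps a database workload is another question; FS-Cache stages data through a local disk, so it mostly pays off when the cache disk is much faster than the network path to the server.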
Cheers,
Jeff