I have about half a dozen nginx servers sitting in front of Gluster (3.4.6, I know it's old) serving a mix of videos and images. It's a moderate amount of traffic; each of two geo-repped sites will do 2-4 gigs/second throughout the day.
Here's the problem. Because of the way nginx buffers the video, reads from Gluster can far exceed what's being served out to the internet: serving 1 gig of video may read 3 gigs from Gluster.
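For anyone who wants to see the ratio I'm describing, this is roughly how I eyeball it on a box, assuming one NIC carries both the Gluster reads (in) and the client traffic (out). `lo` below is just a stand-in so the snippet runs anywhere; you'd use the real interface name.

```shell
# Rough read-amplification check: bytes received (reads from Gluster)
# vs bytes transmitted (served to clients) over a short window.
# "lo" is a stand-in -- substitute the real NIC, e.g. eth0.
IF=lo
RX1=$(awk -v i="$IF:" '$1 == i {print $2}' /proc/net/dev)
TX1=$(awk -v i="$IF:" '$1 == i {print $10}' /proc/net/dev)
sleep 5
RX2=$(awk -v i="$IF:" '$1 == i {print $2}' /proc/net/dev)
TX2=$(awk -v i="$IF:" '$1 == i {print $10}' /proc/net/dev)
echo "in:  $(( (RX2 - RX1) / 5 )) bytes/s (from Gluster)"
echo "out: $(( (TX2 - TX1) / 5 )) bytes/s (to clients)"
```

When the in number is a multiple of the out number, that's the amplification I mean.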
I can fix this by setting the performance cache on the volume to a pretty large size; right now it's at 2 gigs. This works great: gluster uses 1.5 - 2 gigs of RAM, and the in/out bandwidth on the nginx machines becomes a healthy 1:1 or better.
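For reference, this is the knob I'm talking about; "myvol" is a placeholder for the real volume name.

```shell
# Bump the io-cache size on the volume ("myvol" is a placeholder).
gluster volume set myvol performance.cache-size 2GB

# Confirm it took (shows up under "Options Reconfigured"):
gluster volume info myvol
```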
For a few days, anyway. Over time, as the machines' VFS cache fills, gluster starts to use less RAM, and that ratio gets worse. Rebooting the nginx boxes (or, I presume, simply dropping their caches) fixes it immediately.
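To clarify what I mean by dropping caches (the write needs root, so it's commented out here):

```shell
# See how much RAM the kernel is using for caches right now.
grep -E '^(Cached|Buffers|SReclaimable):' /proc/meminfo

# Drop pagecache plus dentries/inodes without a reboot (as root):
#   sync
#   echo 3 > /proc/sys/vm/drop_caches
```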
I'm going to try increasing vm.vfs_cache_pressure on the nginx boxes, as this doc recommends:
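Concretely, the tuning I'm planning to try looks like this; 200 is just a first guess, not a value I've validated.

```shell
# Current value; the kernel default is 100.
cat /proc/sys/vm/vfs_cache_pressure

# Values above 100 make the kernel reclaim dentry/inode cache more
# aggressively. To try it (as root):
#   sysctl -w vm.vfs_cache_pressure=200
# and to persist across reboots, in /etc/sysctl.conf:
#   vm.vfs_cache_pressure=200
```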
Does it make sense to tune this on the clients? Is Gluster's cache competing with the kernel cache? That's roughly my understanding, but I can't find a clear explanation.
Other recommendations would be welcome, though tweaking the direct-io options is unfortunately not an option in my setup.
-Matt
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users