Dear All,

I find that I have to restart glusterd every few days on my servers to stop NFS performance from becoming unbearably slow. When the problem occurs, volumes can take several minutes to mount and there are long delays in responding to "ls". Mounting from a different server, i.e. one not normally used for NFS export, results in normal NFS access speeds.

This doesn't seem to be load-related, because it happens whether or not anything is running on the compute servers. Even when the system is mostly idle there are often a lot of glusterfsd processes running, and on several of the servers I looked at this evening a process called glusterfs was using 100% of one CPU. I can't find anything unusual in nfs.log or etc-glusterfs-glusterd.vol.log on the affected servers.

Restarting glusterd seems to stop this strange behaviour and make NFS access run smoothly again, but the fix usually only lasts a day or two. The degradation doesn't appear to be tied to how long glusterd has been running so much as to how much work the GlusterFS processes on each server have done: I export each of my 8 volumes from a different server, and the NFS slowdown affects the most heavily used volumes more than the others.

I really need to find a solution to this problem. All I can think of is setting up a cron job on each server to restart glusterd every day (sketched in the P.S. below), but I am worried about what side effects that might have. I am using GlusterFS version 3.2.5.

All suggestions would be much appreciated.

Regards,
Dan
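P.S. For what it's worth, the daily restart I'm considering would be a root crontab entry along these lines. The init-script path and the log location are guesses from my own setup and may differ by distribution, so treat this as a sketch rather than a tested recipe:

    # Restart glusterd at 04:00 every day and keep the output for later inspection
    0 4 * * * /etc/init.d/glusterd restart >> /var/log/glusterd-restart.log 2>&1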