There was an fd leak in 3.4.0; if you are running that release, you might
consider upgrading to 3.4.1 or disabling the open-behind feature:
https://bugzilla.redhat.com/show_bug.cgi?id=991622

On Mon, Oct 28, 2013 at 6:36 PM, Joel Stalder <jstalder at panterranetworks.com> wrote:
>
> Folks,
>
> I'm using Gluster 3.4 on CentOS 6 in a very simple two-server, two-brick
> (replica 2) setup. The volume itself has many small files across a
> reasonably large directory tree, though I'm not sure whether that plays a
> role. The FUSE client is being used.
>
> I can live with the performance limitations of small files with Gluster,
> but the problem I'm having is that file descriptor usage on the glusterfs
> servers just continues to grow; I'm not sure when it might actually top
> off, if ever. No rebalance has been or is running. The applications
> running on the two client servers are not leaving the files open.
>
> I've tuned Linux behavior on the glusterfs servers, via /proc, to accept
> over 1 million per-process file descriptors, but that doesn't seem to be
> enough. This volume hit the FD max some time ago and had to be recovered;
> I thought it was a fluke, so I started watching the open FD count and see
> that it's growing again.
>
> # gluster volume top users open
> Brick: node-75:/storage/users
> Current open fds: 765651, Max open fds: 1048558, Max openfd time:
> 2013-10-02 22:26:18.327010
>
> Brick: node-76:/storage/users
> Current open fds: 768936, Max open fds: 768938, Max openfd time:
> 2013-10-28 17:11:04.184964
>
> Clients:
>
> # cat /proc/sys/fs/file-nr
> 5100 0 1572870
>
> # cat /proc/sys/fs/file-nr
> 2550 0 1572870
>
> Looking for thoughts or suggestions here. Has anyone else encountered
> this? Is the recommended solution just to define a ridiculously high
> per-process and global file descriptor max?
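For reference, the open-behind workaround mentioned above is set per volume. A sketch, assuming the volume name `users` taken from the quoted `top` output (substitute your own volume name):

```shell
# Disable the open-behind translator as a workaround for the 3.4.0 fd leak
# (bug 991622); "users" is the volume name from the quoted output above.
gluster volume set users performance.open-behind off

# Confirm the option now appears under "Options Reconfigured"
gluster volume info users

# Re-check whether the per-brick open fd count keeps climbing
gluster volume top users open
```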
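A side note on reading the client numbers: the three fields in /proc/sys/fs/file-nr are allocated file handles, allocated-but-free handles, and the system-wide maximum, so handles actually in use is the first minus the second. A quick sketch using the sample line quoted above:

```shell
# Fields of /proc/sys/fs/file-nr: allocated, free, system-wide max.
# The sample line is taken from the client output quoted in the mail.
echo "5100 0 1572870" | awk '{
    in_use = $1 - $2
    printf "in_use=%d max=%d pct=%.2f%%\n", in_use, $3, 100 * in_use / $3
}'
# Prints: in_use=5100 max=1572870 pct=0.32%
```

So the clients here are nowhere near the global limit; it is the brick processes on the servers whose per-process open fd count is running away.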
>
> -Joel
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users