Excessive file descriptor usage

Hi Joel,

I fixed some fd leaks in the open-behind xlator after the 3.4 release.

Could you run "gluster volume set <volname> performance.open-behind off" and check whether the open-fd count still increases over time?
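In case it helps, the workaround plus verification amounts to the following (volume name is a placeholder, as above; both are standard gluster CLI commands):

```shell
# Disable the open-behind translator on the affected volume.
gluster volume set <volname> performance.open-behind off

# Afterwards, re-check the per-brick open fd counts periodically
# to see whether they still climb.
gluster volume top <volname> open
```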

Commit information:

commit 8c1304b03542eefbbff82014827fc782c3c3584f
Author: Pranith Kumar K <pkarampu at redhat.com>
Date:   Sat Aug 3 08:27:27 2013 +0530

    performance/open-behind: Fix fd-leaks in unlink, rename
    
    Change-Id: Ia8d4bed7ccd316a83c397b53b9c1b1806024f83e
    BUG: 991622
    Signed-off-by: Pranith Kumar K <pkarampu at redhat.com>
    Reviewed-on: http://review.gluster.org/5493
    Tested-by: Gluster Build System <jenkins at build.gluster.com>
    Reviewed-by: Anand Avati <avati at redhat.com>

Pranith
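As a side note for interpreting the numbers quoted below: the three fields of /proc/sys/fs/file-nr are the number of allocated file handles, the number of allocated-but-unused handles, and the system-wide maximum (fs.file-max). A small sketch, using the first client sample from the message below:

```shell
# Parse a /proc/sys/fs/file-nr line: allocated, unused, system max.
# Sample value taken from the quoted message below.
line="5100 0 1572870"
set -- $line
allocated=$1
unused=$2
max=$3
echo "allocated=$allocated, unused=$unused, max=$max"
echo "headroom=$(( max - allocated ))"   # prints headroom=1567770
```

Note that file-nr is a system-wide count; the per-process ceiling is governed separately by fs.nr_open and the RLIMIT_NOFILE (`ulimit -n`) limit.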

----- Original Message -----
> From: "Joel Stalder" <jstalder at panterranetworks.com>
> To: gluster-users at gluster.org
> Sent: Monday, October 28, 2013 11:06:22 PM
> Subject: Excessive file descriptor usage
> 
> 
> Folks,
> 
> 
> I'm using Gluster 3.4 on CentOS 6… a very simple two-server, two-brick
> (replica 2) setup. The volume itself has many small files across a reasonably
> large directory tree, though I'm not sure if that plays a role. The FUSE
> client is being used.
> 
> I can live with the performance limitations of small files with Gluster, but
> the problem I'm having is that file descriptor usage on the glusterfs servers
> just continues to grow… I'm not sure when it might actually top off, if ever.
> No rebalance has been or is running. The applications running on the two
> client servers are not leaving files open.
> 
> I've tuned Linux behavior on the glusterfs servers, via /proc, to accept over
> 1 million per-process file descriptors, but that doesn't seem to be enough.
> This volume hit the FD max some time ago and had to be recovered… I thought
> it was a fluke, so I started watching the open FD count, and I see that it's
> growing again.
> 
> # gluster volume top users open
> 
> Brick: node-75:/storage/users
> 
> Current open fds: 765651, Max open fds: 1048558, Max openfd time: 2013-10-02 22:26:18.327010
> 
> Brick: node-76:/storage/users
> 
> Current open fds: 768936, Max open fds: 768938, Max openfd time: 2013-10-28 17:11:04.184964
> 
> Clients:
> 
> # cat /proc/sys/fs/file-nr
> 
> 5100 0 1572870
> 
> # cat /proc/sys/fs/file-nr
> 
> 2550 0 1572870
> 
> Looking for thoughts or suggestions here. Anyone else encountered this? Is
> the recommended solution to just define a ridiculously high per-process and
> global file descriptor max?
> 
> -Joel
> 
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users

