Re: Sudden performance drop in gluster

On Fri, Apr 14, 2017 at 3:35 PM, Pat Haley <phaley@xxxxxxx> wrote:

This seems to have cleared itself.  For future reference though, what kinds of things should I look at to diagnose an issue like this?


Turning on `gluster volume profile` [1] and sampling the output of `profile info` at periodic intervals would help. In addition, you could also strace the glusterfsd process and/or use `perf record` to determine what the process is doing.

HTH,
Vijay

[1] https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Monitoring%20Workload/
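A rough sketch of the sampling approach above (the volume name `data-volume` and brick PIDs are taken from the status output quoted below; the sampling interval and count are arbitrary choices, not a recommendation):

```shell
#!/bin/sh
# Enable per-volume profiling (adds some overhead; turn it off when done).
gluster volume profile data-volume start

# Sample cumulative FOP/latency statistics every 30 seconds.
for i in 1 2 3 4 5; do
    gluster volume profile data-volume info >> profile-samples.txt
    sleep 30
done

# Attach strace to one brick process (PID 5021 per `gluster volume status`).
# -f follows threads; -c prints a syscall time/count summary on Ctrl-C.
strace -f -c -p 5021

# Alternatively, record a CPU profile of that process for ~30s, then inspect.
perf record -g -p 5021 -- sleep 30
perf report

# Disable profiling afterwards.
gluster volume profile data-volume stop
```

Comparing successive `profile info` samples shows which file operations (and which brick) account for the added latency while the slowdown is occurring.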



Thanks



On 04/14/2017 01:16 PM, Pat Haley wrote:

Hi,

Today we suddenly experienced a performance drop in gluster: e.g., doing an "ls" of a directory with about 20 files takes about 5 minutes.  This is way beyond (and seems separate from) some previous concerns we had.

Our gluster filesystem is two bricks hosted on a single server. Logging onto that server and doing "top" shows a load average of ~30.  In general, no process is showing significant CPU usage except an occasional spike of 3300% from glusterfsd.  The rest of our system is not placing any exceptional demands on the file system (i.e. we aren't suddenly running more jobs than we were yesterday).

Any thoughts on how we can proceed with debugging this will be greatly appreciated.

Some additional information:

glusterfs 3.7.11 built on Apr 27 2016 14:09:22
CentOS release 6.8 (Final)


[root@mseas-data2 ~]# gluster volume status data-volume
Status of volume: data-volume
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick mseas-data2:/mnt/brick1               49154     0          Y       5021
Brick mseas-data2:/mnt/brick2               49155     0          Y       5026

Task Status of Volume data-volume
------------------------------------------------------------------------------
Task                 : Rebalance
ID                   : 892d9e3a-b38c-4971-b96a-8e4a496685ba
Status               : completed


[root@mseas-data2 ~]# gluster volume info data-volume

Volume Name: data-volume
Type: Distribute
Volume ID: c162161e-2a2d-4dac-b015-f31fd89ceb18
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: mseas-data2:/mnt/brick1
Brick2: mseas-data2:/mnt/brick2
Options Reconfigured:
diagnostics.brick-sys-log-level: WARNING
performance.readdir-ahead: on
nfs.disable: on
nfs.export-volumes: off



--

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley                          Email:  phaley@xxxxxxx
Center for Ocean Engineering       Phone:  (617) 253-6824
Dept. of Mechanical Engineering    Fax:    (617) 253-8125
MIT, Room 5-213                    http://web.mit.edu/phaley/www/
77 Massachusetts Avenue
Cambridge, MA  02139-4301

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users

