Fwd: a workaround for high CPU load

The attachment's URL is: http://pan.baidu.com/s/1pKe4GdX

------------------ Original Message ------------------
From: "Norbert" <norbert.huang@xxxxxx>
Sent: Wednesday, March 23, 2016, 6:42 PM
To: "gluster-users" <gluster-users@xxxxxxxxxxx>
Subject: [Gluster-users] a workaround for high CPU load


 

The attachment is the output of perf captured while the CPU load was at 500%~800%; it shows that the getxattr operation consumes most of the CPU time.
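The exact perf invocation isn't given in the thread; a typical way to capture a profile like the attached one would be something along these lines (the 30-second window is an arbitrary choice, and with a single brick per host $(pidof glusterfsd) resolves to one PID):

# Sample the brick process with call graphs for 30 seconds, then inspect
# which functions (e.g. getxattr handling) consume the most CPU time:
perf record -g -p $(pidof glusterfsd) -- sleep 30
perf report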


------------------ Original Message ------------------
From: "Russell Purinton" <russell.purinton@xxxxxxxxx>
Sent: Wednesday, March 23, 2016, 4:06 PM
To: "Norbert" <norbert.huang@xxxxxx>
Cc: "gluster-users" <gluster-users@xxxxxxxxxxx>
Subject: Re: [Gluster-users] a workaround for high CPU load

I'm only about 90% sure about this, but I think the Self Heal Daemon is a glusterfs process that must communicate with glusterfsd for healing to take place. Blocking communications like this essentially breaks the Self Heal Daemon, and your files are probably not being healed correctly.

Healing requires high CPU because of the default "diff" algorithm, which actually reads the files and sends only the differences over the network. You can trade CPU impact for network bandwidth by using the "full" algorithm; then it doesn't need to compute the differences on the files themselves.
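For reference, the algorithm can be switched per volume with an ordinary volume-set command; a minimal sketch for the volume discussed below (the option name is cluster.data-self-heal-algorithm):

# Use the "full" algorithm: copy whole files instead of computing and
# sending only the changed blocks, trading network bandwidth for CPU.
gluster volume set tr5 cluster.data-self-heal-algorithm full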

Can anyone confirm if I’m correct about this?


On Mar 22, 2016, at 11:05 PM, Norbert <norbert.huang@xxxxxx> wrote:


Problem: when self-heal is running on many files, it puts the CPU under high load, and the gluster client cannot write new files or read files,

as described at: http://www.gluster.org/pipermail/gluster-users/2015-November/024228.html

From the following tests, I conclude that dropping the TCP connections between glusterfs and glusterfsd on the gluster server reduces the CPU load without preventing the gluster client from writing and reading files.




========================= config begin ==================================
gluster 3.5.1

# gluster volume status
Status of volume: tr5
Gluster process                                  Port    Online  Pid
------------------------------------------------------------------------------
Brick 192.168.5.252:/home/gluster351/r15         49157   Y       11899
Brick 192.168.5.76:/home/gluster/r15             49152   Y       26692
NFS Server on localhost                          N/A     N       N/A
Self-heal Daemon on localhost                    N/A     Y       11918
NFS Server on 192.168.5.76                       N/A     N       N/A
Self-heal Daemon on 192.168.5.76                 N/A     Y       26705

Task Status of Volume tr5
------------------------------------------------------------------------------
There are no active volume tasks


gluster client: 192.168.16.207:1019
# mount
192.168.5.252:/tr5 on /home/mariadb/data/tr5 type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
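The mount command itself isn't shown in the original mail; a FUSE mount producing an entry like the above would typically be issued as:

# Mount volume tr5 from 192.168.5.252 via the GlusterFS FUSE client:
mount -t glusterfs 192.168.5.252:/tr5 /home/mariadb/data/tr5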

========================= config end ==================================


========================= test 1 begin =================================
At 192.168.5.252, run:
iptables -A INPUT -p tcp --dport 49157 -j DROP -s 192.168.5.252
iptables -A INPUT -p tcp --dport 49157 -j DROP -s 192.168.5.76

At 192.168.5.76, run:
iptables -A INPUT -p tcp --dport 49152 -j DROP -s 192.168.5.252
iptables -A INPUT -p tcp --dport 49152 -j DROP -s 192.168.5.76

The gluster client can write and read files as normal.
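The original mail doesn't show how to undo the workaround; the DROP rules added above can be removed later with the matching -D (delete) form, e.g. at 192.168.5.252:

# Delete the rules inserted above so that self-heal traffic can resume
# (iptables -D removes the first rule matching the same specification):
iptables -D INPUT -p tcp --dport 49157 -j DROP -s 192.168.5.252
iptables -D INPUT -p tcp --dport 49157 -j DROP -s 192.168.5.76

and correspondingly with --dport 49152 at 192.168.5.76.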
========================= test 1 end ==================================


========================= test 2 begin ==================================
1: At 192.168.5.76, shut down the processes glusterd, glusterfsd, and glusterfs.
2: At the gluster client, copy 10k files to the gluster server.
3: At 192.168.5.252, there are now 10k+ link files under the directory /home/gluster351/r15/.glusterfs/indices/xattrop.
4: At 192.168.5.76, start the gluster service; self-heal then begins.
5: During self-heal, at 192.168.5.252, glusterfs (the self-heal daemon, PID 11918) is at 7.0 %CPU and glusterfsd (the brick, PID 11899) at 5.7 %CPU:

  PID USER     PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
11918 root     20   0  321m  36m 2816 S  7.0  0.5   1:28.07 glusterfs
11899 root     20   0  824m  42m 3164 S  5.7  0.5  20:43.39 glusterfsd

6: During self-heal, at 192.168.5.76, glusterfs is at 0 %CPU and glusterfsd at 6.0 %CPU.
7: During self-heal, at 192.168.5.76, run:
iptables -A INPUT -p tcp --dport 49152 -j DROP -s 192.168.5.252
iptables -A INPUT -p tcp --dport 49152 -j DROP -s 192.168.5.76

After running iptables, at both 192.168.5.76 and 192.168.5.252, glusterfs and glusterfsd are at 0 %CPU.
8: At 192.168.5.252, there are still 7000+ link files under the directory /home/gluster351/r15/.glusterfs/indices/xattrop (a way to count them is sketched after this test).
9: The gluster client can write and read files as normal.
========================= test 2 end ==================================
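Steps 3 and 8 above count the pending-heal index entries by hand. One way to get that number, assuming the brick path from the config section, is:

# Count entries in the xattrop index of the 192.168.5.252 brick; each file
# pending heal appears as a hard link here, so this is a rough pending count:
ls /home/gluster351/r15/.glusterfs/indices/xattrop | wc -l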

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users

