Hi Team,
I have a two-board setup with one replicated volume that has two bricks, one on each board. The glusterfs/glusterfsd processes are consuming a large share of CPU, as the top output below shows:

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
3117 root 20 0 1429m 48m 3636 R 98 0.2 212:12.35 fpipt_main_thre
12299 root 20 0 1139m 52m 4192 R 73 0.2 120:41.69 glusterfsd
4517 root 20 0 1139m 52m 4192 S 72 0.2 121:01.54 glusterfsd
1915 root 20 0 1139m 52m 4192 R 62 0.2 121:16.22 glusterfsd
14633 root 20 0 1139m 52m 4192 S 62 0.2 120:37.13 glusterfsd
1992 root 20 0 634m 154m 4340 S 57 0.7 68:11.18 glusterfs
17886 root 20 0 1139m 52m 4192 R 55 0.2 120:28.57 glusterfsd
2664 root 20 0 783m 31m 4708 S 52 0.1 100:13.12 Scc_SctpHost_pr
1914 root 20 0 1139m 52m 4192 S 50 0.2 121:20.19 glusterfsd
12556 root 20 0 1139m 52m 4192 S 50 0.2 120:31.38 glusterfsd
1583 root 20 0 1139m 52m 4192 R 48 0.2 121:16.83 glusterfsd
12112 root 20 0 1139m 52m 4192 R 43 0.2 120:58.73 glusterfsd
Is there any way to identify what is causing this high load, or to reduce it?
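
I was thinking of something along the following lines to narrow it down, but I am not sure what to look for in the output (PID 12299 is just one of the busy glusterfsd processes from the top output above; these are standard top and gluster CLI commands, not something I have already run end to end):

    top -H -p 12299                            # per-thread CPU view of one busy brick process
    gluster volume status c_glusterfs          # maps each brick to its glusterfsd PID
    gluster volume profile c_glusterfs start   # then let it run for a minute or two
    gluster volume profile c_glusterfs info    # per-FOP latency and call counts since start
    gluster volume profile c_glusterfs stop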
I have also collected volume profile logs, but I don't know how to interpret or analyze them.
I am attaching those logs below.
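
For reference, the profile data in the attached log was collected roughly like this (the full moshell session with the output is pasted below):

    gluster volume profile c_glusterfs start
    # wait for a minute or two under normal load
    gluster volume profile c_glusterfs info    # first snapshot (Interval 2 in the log)
    # wait for another minute
    gluster volume profile c_glusterfs info    # second snapshot (Interval 3 in the log)
    gluster volume profile c_glusterfs stop
    gluster volume top c_glusterfs read        # most-read files per brick
    gluster volume info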
--
Regards
Abhishek Paliwal
Log start: 161129-130518 - 10.67.29.150 - moshell 16.0y - /home/emamiko/EVO8300/Issues/CPU_highLoad_C1MP/Gluster_vol_profile.txt

EVOA_8300-1> gluster volume profile c_glusterfs start
161129-13:05:26 10.67.29.150 16.0y CPP_MOM-CPP-LSV203-gen2_gen2_COMPLETE stopfile=/tmp/15640
$ gluster volume profile c_glusterfs start
Starting volume profile on c_glusterfs has been successful
$

EVOA_8300-1> # wait for a minute or two
EVOA_8300-1> wait 120
Waiting from [2016-11-29 13:05:31] to [2016-11-29 13:07:31]...Done.

EVOA_8300-1> gluster volume profile c_glusterfs info
161129-13:07:32 10.67.29.150 16.0y CPP_MOM-CPP-LSV203-gen2_gen2_COMPLETE stopfile=/tmp/15640
$ gluster volume profile c_glusterfs info

Brick: 10.32.0.48:/opt/lvmdir/c2/brick
--------------------------------------
Cumulative Stats:
   Block Size:      1b+       4b+       8b+
 No. of Reads:        0         1         0
No. of Writes:        6         2         9

   Block Size:     16b+      32b+      64b+
 No. of Reads:        0         6         6
No. of Writes:        2         9         6

   Block Size:    128b+     256b+     512b+
 No. of Reads:        1         2         7
No. of Writes:       22        26        67

   Block Size:   1024b+    2048b+    4096b+
 No. of Reads:       14         3         5
No. of Writes:       79       129        35

   Block Size:   8192b+   16384b+   32768b+
 No. of Reads:        3        14        12
No. of Writes:       13         0         1

   Block Size:  65536b+  131072b+
 No. of Reads:       16       224
No. of Writes:       20        16

 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls   Fop
 ---------   -----------   -----------   -----------   ------------   ----
      0.00       0.00 us       0.00 us       0.00 us              2   FORGET
      0.00       0.00 us       0.00 us       0.00 us            248   RELEASE
      0.00       0.00 us       0.00 us       0.00 us         189505   RELEASEDIR
      0.00      70.33 us      63.00 us      74.00 us              3   STATFS
      0.00     172.50 us      90.00 us     255.00 us              2   READDIR
      0.00      92.67 us      46.00 us     128.00 us              6   GETXATTR
      6.29     192.50 us      83.00 us    4065.00 us           4443   SETXATTR
      7.19      88.90 us       3.00 us    1724.00 us          10996   OPENDIR
     14.39     210.96 us      33.00 us   45438.00 us           9282   INODELK
     72.13     278.64 us      44.00 us    1452.00 us          35214   LOOKUP

    Duration: 7146 seconds
   Data Read: 32013955 bytes
Data Written: 4931237 bytes

Interval 2 Stats:
   Block Size:      8b+     512b+    1024b+
 No. of Reads:        0         0         0
No. of Writes:        1         5         5

   Block Size:   2048b+    4096b+    8192b+
 No. of Reads:        0         0         0
No. of Writes:        5         6         5

 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls   Fop
 ---------   -----------   -----------   -----------   ------------   ----
      0.00       0.00 us       0.00 us       0.00 us              1   RELEASE
      0.00       0.00 us       0.00 us       0.00 us          15632   RELEASEDIR
      0.01     172.50 us      90.00 us     255.00 us              2   READDIR
      0.01      97.00 us      46.00 us     128.00 us              4   GETXATTR
      6.38     192.98 us      85.00 us     460.00 us           1879   SETXATTR
      7.46      88.90 us       3.00 us     629.00 us           4767   OPENDIR
      7.82     114.99 us      33.00 us   16289.00 us           3865   INODELK
     78.32     278.35 us      44.00 us    1452.00 us          15988   LOOKUP

    Duration: 541 seconds
   Data Read: 0 bytes
Data Written: 103460 bytes

Brick: 10.32.1.144:/opt/lvmdir/c2/brick
---------------------------------------
Cumulative Stats:
   Block Size:      1b+       4b+       8b+
 No. of Reads:        0         2         1
No. of Writes:       12         2         9

   Block Size:     16b+      32b+      64b+
 No. of Reads:        0        10        10
No. of Writes:        4        18        34

   Block Size:    128b+     256b+     512b+
 No. of Reads:        1         2         5
No. of Writes:       47        66        90

   Block Size:   1024b+    2048b+    4096b+
 No. of Reads:       13         5        15
No. of Writes:       93       131        36

   Block Size:   8192b+   16384b+   32768b+
 No. of Reads:       28        47       100
No. of Writes:       15         1         1

   Block Size:  65536b+  131072b+
 No. of Reads:      431       881
No. of Writes:        2         7

 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls   Fop
 ---------   -----------   -----------   -----------   ------------   ----
      0.00       0.00 us       0.00 us       0.00 us              8   FORGET
      0.00       0.00 us       0.00 us       0.00 us            766   RELEASE
      0.00       0.00 us       0.00 us       0.00 us         249890   RELEASEDIR
      0.00      80.67 us      59.00 us      99.00 us              3   STATFS
      0.00     153.50 us      95.00 us     235.00 us              4   GETXATTR
      0.12     608.38 us     124.00 us   82081.00 us           4446   SETXATTR
      0.20     459.28 us      42.00 us  102079.00 us           9334   INODELK
      0.85    1692.49 us      51.00 us   91491.00 us          10996   OPENDIR
      0.95     571.52 us      24.00 us   84614.00 us          36224   STAT
      1.32     815.53 us      87.00 us   87059.00 us          35231   LOOKUP
     96.57   58347.51 us      62.00 us  246355.00 us          36156   READDIRP

    Duration: 8695 seconds
   Data Read: 162964890 bytes
Data Written: 2025544 bytes

Interval 2 Stats:
   Block Size:      8b+     512b+    1024b+
 No. of Reads:        0         0         0
No. of Writes:        1         5         5

   Block Size:   2048b+    4096b+    8192b+
 No. of Reads:        0         0         0
No. of Writes:        5         6         5

 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls   Fop
 ---------   -----------   -----------   -----------   ------------   ----
      0.00       0.00 us       0.00 us       0.00 us              1   RELEASE
      0.00       0.00 us       0.00 us       0.00 us          15632   RELEASEDIR
      0.00     200.00 us     165.00 us     235.00 us              2   GETXATTR
      0.12     659.87 us     124.00 us   82081.00 us           1882   SETXATTR
      0.19     501.62 us      48.00 us   80022.00 us           3885   INODELK
      0.82    1769.25 us      52.00 us   64336.00 us           4769   OPENDIR
      1.19     583.49 us      26.00 us   84614.00 us          20858   STAT
      1.31     839.80 us      87.00 us   87059.00 us          16005   LOOKUP
     96.37   61324.40 us      74.00 us  241437.00 us          16125   READDIRP

    Duration: 541 seconds
   Data Read: 0 bytes
Data Written: 103460 bytes
$

EVOA_8300-1> # wait for a minute
EVOA_8300-1> wait 60
Waiting from [2016-11-29 13:07:33] to [2016-11-29 13:08:33]...Done.

EVOA_8300-1> gluster volume profile c_glusterfs info
161129-13:08:35 10.67.29.150 16.0y CPP_MOM-CPP-LSV203-gen2_gen2_COMPLETE stopfile=/tmp/15640
$ gluster volume profile c_glusterfs info

Brick: 10.32.0.48:/opt/lvmdir/c2/brick
--------------------------------------
Cumulative Stats:
   Block Size:      1b+       4b+       8b+
 No. of Reads:        0         1         0
No. of Writes:        6         2         9

   Block Size:     16b+      32b+      64b+
 No. of Reads:        0         6         6
No. of Writes:        2         9         6

   Block Size:    128b+     256b+     512b+
 No. of Reads:        1         2         7
No. of Writes:       22        26        67

   Block Size:   1024b+    2048b+    4096b+
 No. of Reads:       14         3         5
No. of Writes:       79       129        35

   Block Size:   8192b+   16384b+   32768b+
 No. of Reads:        3        14        12
No. of Writes:       13         0         1

   Block Size:  65536b+  131072b+
 No. of Reads:       16       224
No. of Writes:       20        16

 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls   Fop
 ---------   -----------   -----------   -----------   ------------   ----
      0.00       0.00 us       0.00 us       0.00 us              2   FORGET
      0.00       0.00 us       0.00 us       0.00 us            248   RELEASE
      0.00       0.00 us       0.00 us       0.00 us         191343   RELEASEDIR
      0.00      70.33 us      63.00 us      74.00 us              3   STATFS
      0.00     172.50 us      90.00 us     255.00 us              2   READDIR
      0.00      92.67 us      46.00 us     128.00 us              6   GETXATTR
      6.04     192.80 us      83.00 us    4065.00 us           4703   SETXATTR
      7.60      88.92 us       3.00 us    1724.00 us          12830   OPENDIR
     14.10     215.83 us      33.00 us   48369.00 us           9809   INODELK
     72.26     279.27 us      44.00 us    1452.00 us          38848   LOOKUP

    Duration: 7208 seconds
   Data Read: 32013955 bytes
Data Written: 4931237 bytes

Interval 3 Stats:
 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls   Fop
 ---------   -----------   -----------   -----------   ------------   ----
      0.00       0.00 us       0.00 us       0.00 us           1838   RELEASEDIR
      3.65     198.02 us      84.00 us     466.00 us            260   SETXATTR
     11.27     301.67 us      46.00 us   48369.00 us            527   INODELK
     11.57      89.04 us      49.00 us     660.00 us           1834   OPENDIR
     73.51     285.37 us     200.00 us     640.00 us           3634   LOOKUP

    Duration: 62 seconds
   Data Read: 0 bytes
Data Written: 0 bytes

Brick: 10.32.1.144:/opt/lvmdir/c2/brick
---------------------------------------
Cumulative Stats:
   Block Size:      1b+       4b+       8b+
 No. of Reads:        0         2         1
No. of Writes:       12         2         9

   Block Size:     16b+      32b+      64b+
 No. of Reads:        0        10        10
No. of Writes:        4        18        34

   Block Size:    128b+     256b+     512b+
 No. of Reads:        1         2         5
No. of Writes:       47        66        90

   Block Size:   1024b+    2048b+    4096b+
 No. of Reads:       13         5        15
No. of Writes:       93       131        36

   Block Size:   8192b+   16384b+   32768b+
 No. of Reads:       28        47       100
No. of Writes:       15         1         1

   Block Size:  65536b+  131072b+
 No. of Reads:      431       881
No. of Writes:        2         7

 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls   Fop
 ---------   -----------   -----------   -----------   ------------   ----
      0.00       0.00 us       0.00 us       0.00 us              8   FORGET
      0.00       0.00 us       0.00 us       0.00 us            766   RELEASE
      0.00       0.00 us       0.00 us       0.00 us         251728   RELEASEDIR
      0.00      80.67 us      59.00 us      99.00 us              3   STATFS
      0.00     153.50 us      95.00 us     235.00 us              4   GETXATTR
      0.12     628.59 us     112.00 us   82081.00 us           4706   SETXATTR
      0.18     469.51 us      42.00 us  102079.00 us           9864   INODELK
      0.84    1667.88 us      49.00 us   91491.00 us          12830   OPENDIR
      0.89     580.14 us      24.00 us   84614.00 us          39055   STAT
      1.25     823.75 us      87.00 us   87059.00 us          38866   LOOKUP
     96.72   58491.54 us      62.00 us  248778.00 us          42190   READDIRP

    Duration: 8757 seconds
   Data Read: 162964890 bytes
Data Written: 2025544 bytes

Interval 3 Stats:
 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls   Fop
 ---------   -----------   -----------   -----------   ------------   ----
      0.00       0.00 us       0.00 us       0.00 us           1838   RELEASEDIR
      0.07     974.16 us     112.00 us   81951.00 us            260   SETXATTR
      0.09     649.60 us      53.00 us   59116.00 us            530   INODELK
      0.53     690.48 us      30.00 us   77783.00 us           2831   STAT
      0.76    1520.32 us      49.00 us   60490.00 us           1834   OPENDIR
      0.90     903.37 us     220.00 us   52687.00 us           3635   LOOKUP
     97.65   59354.60 us     162.00 us  248778.00 us           6034   READDIRP

    Duration: 62 seconds
   Data Read: 0 bytes
Data Written: 0 bytes
$

EVOA_8300-1> gluster volume profile c_glusterfs stop
161129-13:08:36 10.67.29.150 16.0y CPP_MOM-CPP-LSV203-gen2_gen2_COMPLETE stopfile=/tmp/15640
$ gluster volume profile c_glusterfs stop
Stopping volume profile on c_glusterfs has been successful
$

EVOA_8300-1> gluster volume c_glusterfs top
161129-13:08:37 10.67.29.150 16.0y CPP_MOM-CPP-LSV203-gen2_gen2_COMPLETE stopfile=/tmp/15640
$ gluster volume c_glusterfs top
unrecognized word: c_glusterfs (position 1)
$

EVOA_8300-1> gluster volume top c_glusterfs read
161129-13:10:14 10.67.29.150 16.0y CPP_MOM-CPP-LSV203-gen2_gen2_COMPLETE stopfile=/tmp/15640
$ gluster volume top c_glusterfs read
Brick: 10.32.0.48:/opt/lvmdir/c2/brick
Count     filename
=======================
42        /logfiles/availability/CELLO_AVAILABILITY2_LOG.xml
8         /logfiles/alarm_event/ALARM_LOG.xml
5         /logfiles/alarm_event/EVENT_LOG.xml
4         /configuration/oamrdncy.cfg
3         /security/usrmgmt_data/shadow
3         /security/usrmgmt_data/gshadow
3         /security/usrmgmt_data/group
3         /security/usrmgmt_data/passwd
2         /loadmodules_norepl/CXC1724447%25_R13Z/cello/emas/CPPClientConfig.xml
2         /logfiles/security/CELLO_SECURITYEVENT_LOG.xml
2         /logfiles/audit_trail/CORBA_AUDITTRAIL_LOG.xml
2         /systemfiles/cello/ss7/segments
2         /license/parameters_01.lic
1         /loadmodules_norepl/CXC1724447%25_R13Z/cello/emas/CPPClientConfig.xml.tmp
1         /systemfiles/cello/osa/oei
1         /license/licenseKeyInfo.lic

Brick: 10.32.1.144:/opt/lvmdir/c2/brick
Count     filename
=======================
127       <gfid:11924eca-0be7-4998-a784-f5e8c3efae3e>
91        <gfid:f807fd10-816c-471f-aba4-af4334ca4a04>
43        <gfid:8bdb94b6-aaf2-4cc0-974a-74124a5f9f97>
5         <gfid:8629e4b6-3291-4ef7-b731-c79ede4d261c>
4         /logfiles/systemlog/syslog
2         <gfid:7a108660-4fb7-47bf-b182-8361bcf1b544>
2         <gfid:23806fe8-1c01-4ec7-837a-55af95998fe3>
2         <gfid:5ea755d2-8c2a-4bb6-9ef2-89d254ebfa22>
2         <gfid:cf5acaaf-d3e4-442d-99d3-c790b167f762>
2         <gfid:0ef8d31f-5b73-4e1c-a4a2-de0f4213e9af>
2         <gfid:65384e0c-9ea6-4ed2-a51d-64f11ef22d3a>
2         <gfid:a148ee86-e321-4afd-a00c-70407917358f>
2         <gfid:0c03ea31-a961-4c2d-b471-7209d91f1cf9>
2         <gfid:d9a0cba5-e555-4533-bce5-4752838053b1>
2         <gfid:d5eb913b-4b5c-4b34-acd6-6e294cd14b82>
2         <gfid:d7e0add1-a01a-4710-a863-2e1b26e56e67>
2         <gfid:547cc4a9-399b-4b56-9c7c-fce47a430854>
2         <gfid:17293ca0-0779-4b72-94fb-5de85d127bed>
2         <gfid:adb643a5-8c95-4cb6-bb2e-0bfee8c74d8d>
2         <gfid:f29481ca-7ba7-4b7c-ad1d-61cbd52c36b9>
2         <gfid:bc6e0010-c9b2-45fa-bbc0-b82cdad56139>
2         <gfid:e8bf7ca1-888f-4b2e-88c8-b0467fc46e4b>
2         <gfid:22eee50d-2aee-40c7-a764-9c4e29bed522>
2         <gfid:691fc8dc-5945-4f96-a044-16addbe6e924>
2         <gfid:a87ab5e1-6f59-42f3-96bf-641fec842c7b>
2         /loadmodules_norepl/CXC1724447%25_R13Z/cello/emas/CPPClientConfig.xml.tmp
2         <gfid:70422951-de13-4111-8563-62f0af3a24fd>
2         /systemfiles/cello/ss7/segments
2         <gfid:2f2ab403-5497-45c7-b377-d4b093c758bd>
2         /loadmodules_norepl/CXC1723372_R94B01
2         /loadmodules_norepl/CXC1720772_R86A01
2         /configuration/oamrdncy.cfg
1         /pmd/96/002500/pmd-ospi_sccadm-ppc-2166-20161129-100851.tgz
1         <gfid:0f456e93-14c5-4ab9-9b4c-3125d28425fc>
1         <gfid:6599b7e8-adeb-45f2-9dc4-0d373598fe95>
1         <gfid:ec9dab77-e44f-4785-b878-bea8bf8c53c8>
1         /pmd/pmd.data
1         <gfid:9e814c7c-6ac4-40c0-a9de-5e65042f0d78>
1         <gfid:848ee4b0-d84f-4fba-99a3-49cc0bd90dc3>
1         /loadmodules_norepl/CXC1720773_R86A01
1         /loadmodules_norepl/CXC1723373_R93A01
1         <gfid:cf442ff0-fdfc-48b1-87ef-d71364d10835>
1         <gfid:7ae1e5da-90b2-4d70-9e80-6115c7f91d6a>
1         <gfid:30151495-2dbd-445e-b1cb-c7dd66a9cdf2>
1         <gfid:4a5105b8-82f5-464c-9737-5bae7b052492>
1         <gfid:a7d3a075-e064-40b4-8ecb-49e0aae43614>
1         <gfid:b67d631b-36a2-46e5-a587-462c891f0177>
$

EVOA_8300-1> gluster volume info
161129-13:10:55 10.67.29.150 16.0y CPP_MOM-CPP-LSV203-gen2_gen2_COMPLETE stopfile=/tmp/15640
$ gluster volume info

Volume Name: c_glusterfs
Type: Replicate
Volume ID: 560ca2c3-6d79-45ce-bca1-50ec9a2f25ac
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.32.0.48:/opt/lvmdir/c2/brick
Brick2: 10.32.1.144:/opt/lvmdir/c2/brick
Options Reconfigured:
nfs.disable: on
network.ping-timeout: 4
performance.readdir-ahead: on
$
EVOA_8300-1>
Log close: 161129-131102 - /home/emamiko/EVO8300/Issues/CPU_highLoad_C1MP/Gluster_vol_profile.txt
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel