Re: [Gluster-users] Memory leak in glusterfs

Hi Nithya,

We have a setup where a file is copied to, and then deleted from, the gluster mount point in order to keep the latest version of the file on the volume. We noticed that this causes a steady memory increase in the glusterfsd process.

To find the memory leak we have been using valgrind, but it has not given us any leads so far.

That is why we contacted the glusterfs community.
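For reference, the kind of valgrind invocation we run against the brick process looks roughly like the one below. This is only a sketch: the exact brick arguments are copied from the running process, and the option set may differ from what produced the attached logs (-N keeps the process in the foreground so valgrind can follow it).

valgrind --leak-check=full --show-leak-kinds=all --log-file=/tmp/valgrind-%p.log /usr/sbin/glusterfsd -N <brick arguments copied from ps output>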

Regards,
Abhishek

On Thu, Jun 6, 2019, 16:08 Nithya Balachandran <nbalacha@xxxxxxxxxx> wrote:
Hi Abhishek,

I am still not clear as to the purpose of the tests. Can you clarify why you are using valgrind and why you think there is a memory leak?

Regards,
Nithya

On Thu, 6 Jun 2019 at 12:09, ABHISHEK PALIWAL <abhishpaliwal@xxxxxxxxx> wrote:
Hi Nithya,

Here are the setup details and the test we are running:


One client, two gluster servers.
The client writes and deletes one file every 15 minutes via the script test_v4.15.sh, as sketched below.
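
A minimal sketch of what the script does (the file name and paths here are illustrative; the actual logic is in the attached test_v4.15.sh):

while true; do
    cp /tmp/latest_file /gluster_mount/latest_file   # copy the new version onto the volume
    sleep 900                                        # wait 15 minutes
    rm -f /gluster_mount/latest_file                 # delete it before the next copy
done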

IP addresses and paths:
Server side:
128.224.98.157 /gluster/gv0/
128.224.98.159 /gluster/gv0/

Client side:
128.224.98.160 /gluster_mount/

Server side:
gluster volume create gv0 replica 2 128.224.98.157:/gluster/gv0/ 128.224.98.159:/gluster/gv0/ force
gluster volume start gv0

root@128:/tmp/brick/gv0# gluster volume info

Volume Name: gv0
Type: Replicate
Volume ID: 7105a475-5929-4d60-ba23-be57445d97b5
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 128.224.98.157:/gluster/gv0
Brick2: 128.224.98.159:/gluster/gv0
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

exec script: ./ps_mem.py -p 605 -w 61 > log
root@128:/# ./ps_mem.py -p 605
Private + Shared = RAM used Program
23668.0 KiB + 1188.0 KiB = 24856.0 KiB glusterfsd
---------------------------------
24856.0 KiB
=================================
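
The same sampling can also be keyed to whatever PID glusterfsd currently has, for example as below (assuming a single glusterfsd process and that pidof is available; -w 61 makes ps_mem.py resample every 61 seconds):

./ps_mem.py -p $(pidof glusterfsd) -w 61 > /tmp/glusterfsd_mem.log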


Client side:
mount -t glusterfs -o acl -o resolve-gids 128.224.98.157:gv0 /gluster_mount
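
A quick sanity check after mounting (not part of the test scripts, just to confirm the client is talking to the volume):

df -h /gluster_mount          # should list 128.224.98.157:gv0 as the filesystem
mount | grep gluster_mount    # should show a fuse.glusterfs mount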


We are using the below script to write and delete the file.

test_v4.15.sh

We also use the below script to observe the memory increase while the above script is running in the background.

ps_mem.py

I am attaching the script files as well as the results obtained after testing this scenario.

On Wed, Jun 5, 2019 at 7:23 PM Nithya Balachandran <nbalacha@xxxxxxxxxx> wrote:
Hi,

Writing to a volume should not affect glusterd. The stack you have shown in the valgrind output looks like memory used to initialise the structures glusterd uses, and that will be freed only when glusterd is stopped.

Can you provide more details on what it is you are trying to test?

Regards,
Nithya


On Tue, 4 Jun 2019 at 15:41, ABHISHEK PALIWAL <abhishpaliwal@xxxxxxxxx> wrote:
Hi Team,

Please respond on the issue which I raised.

Regards,
Abhishek

On Fri, May 17, 2019 at 2:46 PM ABHISHEK PALIWAL <abhishpaliwal@xxxxxxxxx> wrote:
Anyone please reply....

On Thu, May 16, 2019, 10:49 ABHISHEK PALIWAL <abhishpaliwal@xxxxxxxxx> wrote:
Hi Team,

I have uploaded some valgrind logs from my gluster 5.4 setup, which writes to the volume every 15 minutes. I stopped glusterd and then copied the logs away. The test had been running for some simulated days. The logs are zipped in valgrind-54.zip.

There is a lot of information in valgrind-2730.log: many possibly lost bytes in glusterfs and even some definitely lost bytes.

==2737== 1,572,880 bytes in 1 blocks are possibly lost in loss record 391 of 391
==2737== at 0x4C29C25: calloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==2737== by 0xA22485E: ??? (in /usr/lib64/glusterfs/5.4/xlator/mgmt/glusterd.so)
==2737== by 0xA217C94: ??? (in /usr/lib64/glusterfs/5.4/xlator/mgmt/glusterd.so)
==2737== by 0xA21D9F8: ??? (in /usr/lib64/glusterfs/5.4/xlator/mgmt/glusterd.so)
==2737== by 0xA21DED9: ??? (in /usr/lib64/glusterfs/5.4/xlator/mgmt/glusterd.so)
==2737== by 0xA21E685: ??? (in /usr/lib64/glusterfs/5.4/xlator/mgmt/glusterd.so)
==2737== by 0xA1B9D8C: init (in /usr/lib64/glusterfs/5.4/xlator/mgmt/glusterd.so)
==2737== by 0x4E511CE: xlator_init (in /usr/lib64/libglusterfs.so.0.0.1)
==2737== by 0x4E8A2B8: ??? (in /usr/lib64/libglusterfs.so.0.0.1)
==2737== by 0x4E8AAB3: glusterfs_graph_activate (in /usr/lib64/libglusterfs.so.0.0.1)
==2737== by 0x409C35: glusterfs_process_volfp (in /usr/sbin/glusterfsd)
==2737== by 0x409D99: glusterfs_volumes_init (in /usr/sbin/glusterfsd)
==2737==
==2737== LEAK SUMMARY:
==2737== definitely lost: 1,053 bytes in 10 blocks
==2737== indirectly lost: 317 bytes in 3 blocks
==2737== possibly lost: 2,374,971 bytes in 524 blocks
==2737== still reachable: 53,277 bytes in 201 blocks
==2737== suppressed: 0 bytes in 0 blocks

--




Regards
Abhishek Paliwal


--




Regards
Abhishek Paliwal
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users


--




Regards
Abhishek Paliwal
_______________________________________________

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-devel

