Gluster memory usage


 



Hello All,

 

I’m having some trouble with the memory allocation of my glusterfsd processes. The server keeps running out of memory no matter how much I add (it’s a VM), and I just don’t understand how Gluster allocates memory to its processes.

 

The situation is as follows: RHEL 6.4, 16 GB RAM, Gluster 3.4.2-1.

I have one big LUN of 20 TB that contains all my bricks. I started with 4 GB of memory, but saw it was getting full, so I stepped it up to 8 GB and later to 16 GB. The weird thing is that VMware claims only 50% of the memory is actively used, so I guess this is all due to caching?

I tried flushing the caches, but it doesn’t seem to free up much.
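For clarity, this is roughly what I mean by flushing the caches (a generic Linux sketch, nothing Gluster-specific; the value 3 drops page cache plus dentries and inodes):

```shell
# Flush kernel caches to test whether "used" memory is reclaimable
# cache or genuinely pinned by processes. Needs root to write the
# sysctl; guarded here so the snippet is safe to run as-is.
sync                                    # write out dirty pages first
if [ -w /proc/sys/vm/drop_caches ]; then
    echo 3 > /proc/sys/vm/drop_caches   # 3 = pagecache + dentries + inodes
fi
free -m                                 # cache shrinks; glusterfsd RSS does not
```

Note that this only reclaims kernel cache; it cannot shrink the resident memory of the glusterfsd processes themselves.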

 

If I check my memory consumption I see this:

 

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
26800 root      20   0 2593m 1.9g 1088 S 17.3 12.5   5261:43 glusterfsd
26914 root      20   0 2541m 1.9g 1072 S 15.9 12.2   1081:04 glusterfsd
33299 root      20   0 2938m 1.9g 1056 S 15.9 12.1   1787:40 glusterfsd
26872 root      20   0 2093m 1.3g 1080 S 15.6  8.6   3391:02 glusterfsd
26995 root      20   0 1828m 1.3g 1072 S  8.0  8.1 782:29.85 glusterfsd
26934 root      20   0 1891m 1.0g  376 S  0.0  6.6 795:44.19 glusterfsd
37651 root      20   0 1309m 1.0g  332 S  0.0  6.6   0:03.52 glusterfs
27015 root      20   0 1374m 780m  388 S  0.0  4.9 580:29.19 glusterfsd
29712 root      20   0  989m 504m 1076 S 20.2  3.2 227:47.87 glusterfsd
 8635 root      20   0 2415m 186m  392 S  0.0  1.2 594:02.47 glusterfsd
31859 root      20   0  643m 174m  392 S  0.0  1.1  99:06.60 glusterfsd
27308 root      20   0  440m 169m 1296 S 10.0  1.1 646:30.02 glusterfs
27458 root      20   0  415m 149m 1280 S 12.3  0.9 781:36.04 glusterfs
27358 root      20   0  420m 143m 1272 S 10.3  0.9 553:04.97 glusterfs
 7453 root      20   0  609m 133m  872 S  0.0  0.8  27:49.83 glusterfsd
27508 root      20   0  410m 117m 1272 S  9.6  0.7 560:42.51 glusterfs
 8608 root      20   0 2418m 111m  372 S  0.0  0.7 785:53.31 glusterfsd
27408 root      20   0  373m  96m 1272 S  9.6  0.6 431:32.28 glusterfs
27558 root      20   0  368m  87m  344 S  0.0  0.6 318:37.92 glusterfs
40715 root      20   0  203m  86m 1036 S  0.3  0.5   3:41.78 puppetd
30940 root      20   0  674m  78m  960 S  0.0  0.5  40:18.17 glusterfsd
19363 root      20   0 1437m  74m  728 S  0.0  0.5  60:12.02 glusterfsd
 8773 root      20   0  685m  71m  956 S  0.0  0.4  19:40.05 glusterfsd
  789 root      20   0  625m  67m  900 S  0.0  0.4  22:45.64 glusterfsd
 1599 root      20   0 1291m  41m  956 S  0.0  0.3  13:03.23 glusterfsd
27808 root      20   0  482m  36m  344 S  0.0  0.2  42:40.67 glusterfs
47625 root      20   0  677m  31m  900 S  0.0  0.2   4:09.29 glusterfsd
30539 root      20   0 1381m  27m  852 S  0.0  0.2  20:31.90 glusterfsd

Etc…
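To see how much RAM is pinned by the brick daemons themselves (as opposed to kernel cache), a one-liner like this sums their resident set sizes; it is plain procps, nothing Gluster-specific:

```shell
# Sum the resident (RSS) memory of all glusterfsd brick processes.
# "rss=" suppresses the header; ps reports RSS in kB.
ps -C glusterfsd -o rss= \
    | awk '{sum += $1} END {printf "glusterfsd RSS total: %.1f GB\n", sum/1024/1024}'
```

The same command with `-C glusterfs` covers the client/self-heal processes.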

 

Right now I’m at:

             total       used       free     shared    buffers     cached
Mem:      16326684   16176692     149992          0      14284      59520
-/+ buffers/cache:   16102888     223796
Swap:      4194296    4194252         44

 

… the lack of free swap worries me a lot.
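A quick way to cross-check the `free` numbers against the kernel’s own accounting (plain `/proc/meminfo`, nothing Gluster-specific): if MemFree + Buffers + Cached is small, the memory really is held by processes rather than reclaimable cache.

```shell
# Print the key /proc/meminfo fields in MB. The regex is anchored so
# "Cached" does not also match "SwapCached".
awk '/^(MemFree|Buffers|Cached|SwapFree):/ {printf "%-10s %8d MB\n", $1, $2/1024}' /proc/meminfo
```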

 

Is it normal for some volumes to consume that much memory? How does Gluster decide how much memory it should allocate? Is there any way to influence this?

From time to time, bricks switch themselves offline. Is this due to the lack of free memory?
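One hypothesis worth ruling out here: the kernel OOM killer terminating brick processes when the box runs out of memory and swap. A rough log check (the log path assumes RHEL defaults, and the grep patterns are my guess at typical OOM-killer messages):

```shell
# Search kernel logs for OOM-killer activity against gluster processes.
# On RHEL 6 the kernel logs to /var/log/messages; dmesg covers the
# current boot. Prints a fallback line if nothing matches.
{ dmesg; cat /var/log/messages; } 2>/dev/null \
    | grep -iE 'out of memory|oom-killer|killed process' \
    | grep -i gluster \
    || echo "no OOM kills of gluster processes found"
```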

 

 

Any help shedding some light on this situation would be most welcome, as I have no idea where to start.

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users
