Re: slab-nomerge (was Re: [git pull] device mapper changes for 4.3)

On (09/04/15 07:11), Linus Torvalds wrote:
> >
> > But I went through the corresponding slabinfo (I track slabinfo too); and yes,
> > zero unused objects.
> 
> Ahh. I should have realized - the number you are actually tracking is
> meaningless. The "unused objects" thing is not really tracked well.
> 
> /proc/slabinfo ends up not showing the percpu queue state, so things
> look "used" when they are really just on the percpu queues for that
> slab. So the "unused" number you are tracking is not really meaningful,
> and the zeroes you are seeing is just a symptom of that: slabinfo
> isn't "exact" enough.
> 
> So you should probably do the statistics on something that is more
> meaningful: the actual number of pages that have been allocated (which
> would be numslabs times pages-per-slab).


Aha... Didn't know that, sorry.
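
For the record, a quick way to get the number Linus suggests straight from
/proc/slabinfo (a rough sketch; the column positions assume the 2.1 layout,
where pagesperslab is field 6 and num_slabs is field 15, and a 4K page size):

 # sum num_slabs * pagesperslab over all caches, skipping the two header lines
 awk 'NR > 2 { pages += $6 * $15 }
      END    { printf "%d pages (%.1f MiB)\n", pages, pages * 4096 / 1048576 }' /proc/slabinfo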

Christoph Lameter wrote:
> Please use the slabinfo tool. What you see in /proc/slabinfo is generated
> for slab compatibility and may not show useful numbers.
> 

OK. I did another round of tests:

 git clone git://sourceware.org/git/glibc.git
 make -j8
 package (xz)
 rm -fr glibc

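While those steps run, slabinfo -T gets sampled periodically; roughly along
these lines (a sketch only; the 1-second interval and log name are
illustrative):

 while :; do
         slabinfo -T >> slabinfo-T.log
         sleep 1
 done &
 SAMPLER=$!

 # ... run the workload steps above ...

 kill $SAMPLER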


From the slabinfo -T output:

Slabcaches :  91      Aliases  : 118->69  Active:  65
Memory used:  60.0M   # Loss   :  13.2M   MRatio:    28%
# Objects  : 162.4K   # PartObj:  10.6K   ORatio:     6%

Per Cache    Average         Min         Max       Total
---------------------------------------------------------
#Objects        2.4K          11       19.0K      162.4K
#Slabs           108           1        1.8K        7.0K
#PartSlab         34           0        1.6K        2.2K
%PartSlab         7%          0%         86%         31%
PartObjs           6           0        4.7K       10.6K
% PartObj         3%          0%         33%          6%
Memory        923.9K        8.1K       10.2M       60.0M
Used          720.3K        8.0K        9.7M       46.8M
Loss          203.6K           0        6.1M       13.2M

Per Object   Average         Min         Max
---------------------------------------------
Memory           290           8        8.1K
User             288           8        8.1K
Loss               1           0          64


I took the
       "Memory used:  60.0M   # Loss   :  13.2M   MRatio:    28%"
line and generated 3 graphs:
-- "Memory used"	MM
-- "Loss"		LOSS
-- "MRatio"		RATION

for "slab_nomerge = 0" and "slab_nomerge = 1".

... And those are sort of interesting. I was expecting to see more
divergent behaviour.

Attached.

Please let me know if you want to see files with the numbers
(slabinfo -T only).

	-ss

Attachment: glibc-RATIO-merge_vs_nomerge.png
Description: PNG image

Attachment: glibc-LOSS-merge_vs_nomerge.png
Description: PNG image

Attachment: glibc-MM-merge_vs_nomerge.png
Description: PNG image