Re: [PATCH 00 of 41] Transparent Hugepage Support #17

On Mon, 5 Apr 2010, Pekka Enberg wrote:
> 
> Unfortunately I wasn't able to find a pony on Google but here are some
> huge page numbers if you're interested:

You missed the point.

Those numbers weren't done with the patches in question. They weren't done 
with the magic new code that can handle fragmentation and swapping. They 
are simply not relevant to any of the complex code under discussion.

The thing you posted is already doable (and done) using the existing hacky 
(but at least unsurprising) preallocation crud. We know that works. That's 
never been the issue.

What I'm asking for is this thing called "Does it actually work in 
REALITY". That's my point about "not just after a clean boot".

Just to really hit the issue home, here's my current machine:

	[root@i5 ~]# free
	             total       used       free     shared    buffers     cached
	Mem:       8073864    1808488    6265376          0      75480    1018412
	-/+ buffers/cache:     714596    7359268
	Swap:     10207228      12848   10194380

Look, I have absolutely _sh*tloads_ of memory, and I'm not using it. 
Really. I've got 8GB in that machine, it's just not been doing much more 
than a few "git pull"s and "make allyesconfig" runs to check the current 
kernel and so it's got over 6GB free. 

So I'm bound to have _tons_ of 2M pages, no?

No. Lookie here:

	[344492.280001] DMA: 1*4kB 1*8kB 1*16kB 2*32kB 2*64kB 0*128kB 1*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB = 15836kB
	[344492.280020] DMA32: 17516*4kB 19497*8kB 18318*16kB 15195*32kB 10332*64kB 5163*128kB 1371*256kB 123*512kB 2*1024kB 1*2048kB 0*4096kB = 2745528kB
	[344492.280027] Normal: 57295*4kB 66959*8kB 39639*16kB 29486*32kB 10483*64kB 2366*128kB 398*256kB 100*512kB 27*1024kB 3*2048kB 0*4096kB = 3503268kB

just to help you parse that: this is a _lightly_ loaded machine. It's been 
up for about four days. And look at it.
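(Side note, in case anyone wants the same dump from their own box: that
per-order breakdown is the same one show_mem() prints, and assuming
CONFIG_MAGIC_SYSRQ is enabled you can trigger an equivalent dump by hand
and read it back out of the kernel log:

	# echo m > /proc/sysrq-trigger
	# dmesg | tail -40

The same per-order free counts are also exposed in /proc/buddyinfo,
without going through the log.)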

In case you can't read it, the relevant part is this part:

	DMA: .. 1*2048kB 3*4096kB
	DMA32: .. 1*2048kB 0*4096kB
	Normal: .. 3*2048kB 0*4096kB

there is just a _small handful_ of 2MB pages. Seriously. On a machine with 
8 GB of RAM, three quarters of it free, there are just a couple of 
contiguous 2MB regions. Note, that's _MB_, not GB.
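(If you'd rather count these than squint at the log, a quick sketch,
assuming a plain x86-64 box with 4kB base pages and the usual 11 buddy
orders, so order 9 is 2MB and order 10 is 4MB: /proc/buddyinfo lists the
free block count per order for each zone, so something like

	# awk '{ printf "%-8s 2MB blocks: %6d  4MB blocks: %6d\n", $4, $14, $15 }' /proc/buddyinfo

prints just the order-9 and order-10 columns, i.e. the same sad handful
of hugepage-sized regions shown above.)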

And don't tell me that these things are easy to fix. Don't tell me that 
the current VM is quite clean and can be harmlessly extended to deal with 
this all. Just don't. Not when we currently have a totally unexplained 
regression in the VM from the last scalability thing we did.

		Linus
