> On Wed, Apr 20, 2011 at 4:23 AM, KOSAKI Motohiro
> <kosaki.motohiro@xxxxxxxxxxxxxx> wrote:
> > I'm worried about this patch. A lot of mm code assumes that !NUMA
> > systems only have node 0. Not only SLUB.
>
> So is that a valid assumption or not? Christoph seems to think it is
> and James seems to think it's not. Which way should we aim to fix it?
> Would be nice if other people chimed in, as we already know what James
> and Christoph think.

I'm sorry, I don't really know. The reason has been lost in the mists
of history. ;-)

Now, CONFIG_NUMA has mainly five meanings:

1) the system may have a non-zero node id
2) mm/mempolicy.c is compiled (i.e. the mempolicy APIs are enabled)
3) the allocators (kmalloc, vmalloc, alloc_page, et al.) are aware of
   the NUMA topology
4) the zone-reclaim feature is enabled
5) the scheduler builds per-node load-balancing scheduler domains

Anyway, we have to fix this issue. I'm digging into which fix carries
the least risk.

By the way, x86 doesn't have this issue, which is probably why it was
neglected for so long:

arch/x86/Kconfig
-------------------------------------
config ARCH_DISCONTIGMEM_ENABLE
	def_bool y
	depends on NUMA && X86_32
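
To illustrate the node-0 assumption discussed above, here is a minimal,
self-contained user-space sketch. It is not actual kernel code, and the
helper names (fake_page_to_nid(), alloc_ok_on_node()) are made up for
illustration only; it just mimics the pattern where !NUMA code
hard-codes node 0 while a DISCONTIGMEM machine can still hold memory on
a non-zero node id:

-------------------------------------
/*
 * Illustrative user-space sketch only, not kernel code.  Shows why
 * "!NUMA implies node 0" breaks when memory lives on another node.
 */
#include <stdio.h>

/* Stand-in for the node id encoded in a page's flags. */
static int fake_page_to_nid(int page_node)
{
	return page_node;
}

/* Pattern resembling a !NUMA allocator fast path: only node 0 is expected. */
static int alloc_ok_on_node(int page_node)
{
	int expected_node = 0;	/* the "!NUMA means node 0" assumption */

	return fake_page_to_nid(page_node) == expected_node;
}

int main(void)
{
	/* Memory on node 0 behaves as expected... */
	printf("node 0: %s\n", alloc_ok_on_node(0) ? "ok" : "mismatch");
	/* ...but a !NUMA DISCONTIGMEM box with memory on node 1 trips it. */
	printf("node 1: %s\n", alloc_ok_on_node(1) ? "ok" : "mismatch");
	return 0;
}
-------------------------------------

Built with a plain "cc", the second line prints "mismatch", which is
the shape of the problem the parisc machines are hitting.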