On Thu, 29 May 2008 22:03:14 -0700 (PDT) Christoph Lameter <clameter@xxxxxxx> wrote:

> On Thu, 29 May 2008, Andrew Morton wrote:
>
> > All seems reasonable to me.  The obvious question is "how do we size
> > the arena".  We either waste memory or, much worse, run out.
>
> The per cpu memory use by subsystems is typically quite small. We already
> have an 8k limitation for percpu space for modules. And that does not seem
> to be a problem.

eh?  That's DEFINE_PER_CPU memory, not alloc_percpu() memory?

> > And running out is a real possibility, I think.  Most people will only
> > mount a handful of XFS filesystems.  But some customer will come along
> > who wants to mount 5,000, and distributors will need to cater for that,
> > but how can they?
>
> Typically these are fairly small. 8 bytes * 5000 is only 40k.

It was just an example.  There will be others.

tcp_v4_md5_do_add
->tcp_alloc_md5sig_pool
  ->__tcp_alloc_md5sig_pool

does an alloc_percpu for each md5-capable TCP connection.  I think - it
doesn't matter really, because something _could_.  And if something
_does_, we're screwed.

> > I wonder if we can arrange for the default to be overridden via a
> > kernel boot option?
>
> We could do that yes.

Phew.

> > Another obvious question is "how much of a problem will we have with
> > internal fragmentation"?  This might be a drop-dead showstopper.
>
> But then per cpu data is not frequently allocated and freed.

I think it is, in the TCP case.  And that's the only one I looked at.
Plus who knows what lies ahead of us?

> Going away from allocpercpu saves a lot of memory. We could make this
> 128k or so to be safe?

("alloc_percpu" - please be careful about getting this stuff right)

I don't think there is presently any upper limit on alloc_percpu()?  It
uses kmalloc() and kmalloc_node()?

Even if there is some limit, is it an unfixable one?
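
For concreteness, here is a minimal sketch (not code from this thread) of
how such a boot-time override of the arena size could look.  The
"percpu_arena_size=" parameter name, the 128k default and the
pcpu_arena_size variable are assumptions for illustration only, not
anything that exists in the tree:

#include <linux/init.h>
#include <linux/kernel.h>

/* Default of 128k as floated above; purely illustrative. */
static unsigned long pcpu_arena_size = 128 * 1024;

static int __init percpu_arena_size_setup(char *str)
{
	if (!str)
		return -EINVAL;

	/* memparse() accepts suffixes such as "256k" or "1m". */
	pcpu_arena_size = memparse(str, &str);
	return 0;
}
early_param("percpu_arena_size", percpu_arena_size_setup);

Booting with e.g. percpu_arena_size=256k would then let a distributor
raise the reserve without rebuilding the kernel.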
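
And for readers unfamiliar with the allocation pattern being worried
about, a rough sketch of the alloc_percpu() usage style, in today's
idiom (illustrative only, not the actual tcp_md5sig_pool code): every
caller that does this draws from the same shared per-cpu area, which is
exactly the sizing problem discussed above.

#include <linux/percpu.h>
#include <linux/errno.h>

struct md5_pool_like {
	char scratch[64];	/* stand-in for per-cpu scratch state */
};

/* One dynamic per-cpu allocation per user of the facility. */
static struct md5_pool_like __percpu *pool;

static int pool_init(void)
{
	/* Each alloc_percpu() consumes space from the shared per-cpu area. */
	pool = alloc_percpu(struct md5_pool_like);
	if (!pool)
		return -ENOMEM;
	return 0;
}

static void pool_exit(void)
{
	free_percpu(pool);
}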