On Wed, Jun 24, 2009 at 05:13:37PM -0700, H. Peter Anvin wrote:
> Andi Kleen wrote:
> >
> > Haven't read the new patches, but per cpu data always was sized
> > for all possible CPUs.
> >
> >> and N is large, what did it cost?
> >
> >> And what are reasonable values of N?
> >
> > N should normally not be large anymore, since num_possible_cpus()
> > is sized based on firmware information now.
>
> *Ahem* virtual machines *ahem*...

And? Even there it's typically not that big. The traditional problem was
just with NR_CPUS=128 kernels, where nothing was sized based on the actual
machine capacity. Also, on large systems the VMs shouldn't be sized for
full capacity.

>
> I have discussed this with Tejun, and the plan is to allocate the percpu
> information when a processor is first brought online (but not removed
> when it is offlined again.)  It's a real problem for 32-bit VMs, so it's
> more important than you'd think.

Then you have to rewrite all the code that does for_each_possible_cpu(x)
at initialization time to use callbacks instead (see the sketch below).
It would be a gigantic change all over the tree.

-Andi

-- 
ak@xxxxxxxxxxxxxxx -- Speaking for myself only.
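To give an idea of what that conversion means for one single user, here is a
minimal sketch (a made-up "foo" driver, using the CPU hotplug notifier API
via register_cpu_notifier; not taken from any real patch, and with hotplug
locking and error unwinding omitted):

/*
 * Hypothetical example only: a driver that today just does
 *
 *      for_each_possible_cpu(cpu)
 *              per_cpu(foo_stats, cpu) = kzalloc(..., GFP_KERNEL);
 *
 * in its init function would instead need something like this.
 */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/cpu.h>
#include <linux/percpu.h>
#include <linux/slab.h>
#include <linux/notifier.h>
#include <linux/errno.h>

struct foo_stats {
        unsigned long events;
};

static DEFINE_PER_CPU(struct foo_stats *, foo_stats);

/* Allocate this CPU's data if it has never been online before. */
static int foo_alloc_for_cpu(long cpu)
{
        if (!per_cpu(foo_stats, cpu))
                per_cpu(foo_stats, cpu) =
                        kzalloc(sizeof(struct foo_stats), GFP_KERNEL);
        return per_cpu(foo_stats, cpu) ? 0 : -ENOMEM;
}

static int foo_cpu_callback(struct notifier_block *nb,
                            unsigned long action, void *hcpu)
{
        long cpu = (long)hcpu;

        switch (action) {
        case CPU_UP_PREPARE:
        case CPU_UP_PREPARE_FROZEN:
                if (foo_alloc_for_cpu(cpu))
                        return NOTIFY_BAD;
                break;
        /* Nothing freed on CPU_DEAD; the data stays for the next online. */
        }
        return NOTIFY_OK;
}

static struct notifier_block foo_cpu_notifier = {
        .notifier_call = foo_cpu_callback,
};

static int __init foo_init(void)
{
        int cpu;

        /* CPUs that are already online when we initialize... */
        for_each_online_cpu(cpu)
                foo_alloc_for_cpu(cpu);

        /* ...and a callback for everything that comes up later. */
        register_cpu_notifier(&foo_cpu_notifier);
        return 0;
}
module_init(foo_init);

Every for_each_possible_cpu() user that allocates at init time would need
something like this, each with its own locking against CPUs coming up
concurrently. That's why the change would be so large.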