On Mon, Mar 14, 2022 at 2:09 AM Huang, Ying <ying.huang@xxxxxxxxx> wrote:
>
> Hi, Yu,
>
> Yu Zhao <yuzhao@xxxxxxxxxx> writes:
> > diff --git a/mm/Kconfig b/mm/Kconfig
> > index 3326ee3903f3..747ab1690bcf 100644
> > --- a/mm/Kconfig
> > +++ b/mm/Kconfig
> > @@ -892,6 +892,16 @@ config ANON_VMA_NAME
> >  	  area from being merged with adjacent virtual memory areas due to the
> >  	  difference in their name.
> >
> > +# the multi-gen LRU {
> > +config LRU_GEN
> > +	bool "Multi-Gen LRU"
> > +	depends on MMU
> > +	# the following options can use up the spare bits in page flags
> > +	depends on !MAXSMP && (64BIT || !SPARSEMEM || SPARSEMEM_VMEMMAP)
>
> LRU_GEN depends on !MAXSMP. So, what is the maximum NR_CPUS supported
> by LRU_GEN?

LRU_GEN doesn't really care about NR_CPUS; IOW, it doesn't impose a
maximum number of CPUs. The dependency is on NODES_SHIFT, which MAXSMP
selects:

    default "10" if MAXSMP

This, combined with LAST_CPUPID_SHIFT, can exhaust the spare bits in
page flags.

MAXSMP is meant for kernel developers to test their code, and it should
not be used in production [1]. But some distros unfortunately ship
kernels built with this option, e.g., Fedora and Ubuntu, and their users
reported build errors to me after they applied MGLRU on those kernels
("Not enough bits in page flags").

Let me add Fedora and Ubuntu to this thread.

Fedora and Ubuntu, could you please clarify whether there is a reason to
ship kernels built with MAXSMP? Otherwise, please consider disabling
this option. Thanks.

As noted above, MAXSMP enables ridiculously large numbers of CPUs and
NUMA nodes for testing purposes. It is also detrimental to performance,
e.g., through CPUMASK_OFFSTACK.

[1] https://lore.kernel.org/lkml/20131106055634.GA24044@xxxxxxxxx/