On Tue, Apr 12, 2016 at 01:50:59PM +0900, js1304@xxxxxxxxx wrote:
> From: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
>
> It can be reused in other places, so factor it out. A following patch
> will use it.
>
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
> ---
>  mm/slab.c | 68 ++++++++++++++++++++++++++++++++++++---------------------------
>  1 file changed, 39 insertions(+), 29 deletions(-)
>
> diff --git a/mm/slab.c b/mm/slab.c
> index 5451929..49af685 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -841,6 +841,40 @@ static inline gfp_t gfp_exact_node(gfp_t flags)
>  }
>  #endif
>
> +static int init_cache_node(struct kmem_cache *cachep, int node, gfp_t gfp)
> +{
> +	struct kmem_cache_node *n;
> +
> +	/*
> +	 * Set up the kmem_cache_node for cpu before we can
> +	 * begin anything. Make sure some other cpu on this
> +	 * node has not already allocated this
> +	 */
> +	n = get_node(cachep, node);
> +	if (n)
> +		return 0;
> +
> +	n = kmalloc_node(sizeof(struct kmem_cache_node), gfp, node);
> +	if (!n)
> +		return -ENOMEM;
> +
> +	kmem_cache_node_init(n);
> +	n->next_reap = jiffies + REAPTIMEOUT_NODE +
> +		((unsigned long)cachep) % REAPTIMEOUT_NODE;
> +
> +	n->free_limit =
> +		(1 + nr_cpus_node(node)) * cachep->batchcount + cachep->num;
> +
> +	/*
> +	 * The kmem_cache_nodes don't come and go as CPUs
> +	 * come and go. slab_mutex is sufficient
> +	 * protection here.
> +	 */
> +	cachep->node[node] = n;
> +
> +	return 0;
> +}
> +

Hello, Andrew.

Could you apply the following fix for this patch to mmotm?

Thanks.

------>8-----------
Date: Thu, 14 Apr 2016 10:28:11 +0900
Subject: [PATCH] mm/slab: fix bug

n->free_limit is set once during the boot-up process, before multiple
cpus are enabled, so it can be a very low value. If we don't re-set it
when another cpu comes up, it stays too low. Fix it.
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
---
 mm/slab.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/mm/slab.c b/mm/slab.c
index 13e74aa..59dd94a 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -856,8 +856,14 @@ static int init_cache_node(struct kmem_cache *cachep, int node, gfp_t gfp)
 	 * node has not already allocated this
 	 */
 	n = get_node(cachep, node);
-	if (n)
+	if (n) {
+		spin_lock_irq(&n->list_lock);
+		n->free_limit = (1 + nr_cpus_node(node)) * cachep->batchcount +
+				cachep->num;
+		spin_unlock_irq(&n->list_lock);
+
 		return 0;
+	}

 	n = kmalloc_node(sizeof(struct kmem_cache_node), gfp, node);
 	if (!n)
--
1.9.1

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@xxxxxxxxx.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@xxxxxxxxx