I do not have a way of tracing it. I meant to reply when I did, but that has not changed. That being said, I like this patch.

On May 29, 2014, at 2:22 AM, Joonsoo Kim <iamjoonsoo.kim@xxxxxxx> wrote:

> Richard Yao reported a month ago that his system had trouble with
> vmap_area_lock contention during performance analysis
> via /proc/meminfo. Andrew asked why his analysis polls /proc/meminfo
> so heavily, but he didn't answer.
>
> https://lkml.org/lkml/2014/4/10/416
>
> Although I'm not sure whether this is the right usage, there is a solution
> that reduces vmap_area_lock contention with no side effect: just
> use the rcu list iterator in get_vmalloc_info(). This function only needs
> values from the vmap_area structure, so we don't need to grab a spinlock.
>
> Reported-by: Richard Yao <ryao@xxxxxxxxxx>
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index f64632b..fdbb116 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -2690,14 +2690,14 @@ void get_vmalloc_info(struct vmalloc_info *vmi)
>
> 	prev_end = VMALLOC_START;
>
> -	spin_lock(&vmap_area_lock);
> +	rcu_read_lock();
>
> 	if (list_empty(&vmap_area_list)) {
> 		vmi->largest_chunk = VMALLOC_TOTAL;
> 		goto out;
> 	}
>
> -	list_for_each_entry(va, &vmap_area_list, list) {
> +	list_for_each_entry_rcu(va, &vmap_area_list, list) {
> 		unsigned long addr = va->va_start;
>
> 		/*
> @@ -2724,7 +2724,7 @@ void get_vmalloc_info(struct vmalloc_info *vmi)
> 		vmi->largest_chunk = VMALLOC_END - prev_end;
>
> out:
> -	spin_unlock(&vmap_area_lock);
> +	rcu_read_unlock();
> }
> #endif
>
> --
> 1.7.9.5

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@xxxxxxxxx. For more info on Linux MM, see:
http://www.linux-mm.org/ .