There is no need to find the very first suitable vmap_area within alloc_vmap_area(); when free_vmap_cache misses, a fitting hole can be found by checking each node during the rbtree descent:

                 vmap_area_root
                     /      \
             tmp_next        U
                /   (T1)
              tmp
               /
             ...    (T2)
             /
          first

 vmap_area_list->first->......->tmp->tmp_next->...->vmap_area_list
                 |---------(T3)---------|

When free_vmap_cache misses, the total time spent finding a suitable hole is T = T1 + T2 + T3; this patch reduces it to T1. In practice, vmalloc() always starts searching from a fixed address (VMALLOC_START), which places 'first' close to the beginning of the list (vmap_area_list) and therefore makes T3 large. The patch helps especially for a large and almost full vmalloc area. It does NOT affect the existing fast path: the new check only takes effect when free_vmap_cache misses, and the cache is re-established afterwards as before.

Signed-off-by: Zhaoyang Huang <zhaoyang.huang@xxxxxxxxxxxxxx>
---
 mm/vmalloc.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 8698c1c..f58f445 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -471,9 +471,20 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 
 		while (n) {
 			struct vmap_area *tmp;
+			struct vmap_area *tmp_next;
 			tmp = rb_entry(n, struct vmap_area, rb_node);
+			tmp_next = list_next_entry(tmp, list);
 			if (tmp->va_end >= addr) {
 				first = tmp;
+				if (ALIGN(tmp->va_end, align) + size
+						< tmp_next->va_start) {
+					/*
+					 * free_vmap_cache miss now, don't
+					 * update cached_hole_size here,
+					 * as __free_vmap_area does
+					 */
+					goto found;
+				}
 				if (tmp->va_start <= addr)
 					break;
 				n = n->rb_left;
-- 
1.9.1
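
A minimal userspace sketch of the idea, for illustration only: it models the data structures with a sorted array rather than the kernel's rbtree plus vmap_area_list, and find_hole, struct range and ALIGN_UP are made-up names, not kernel interfaces. The binary search stands in for the rbtree descent (T1), the gap check at every probed element plays the role of the new early exit, and the linear walk at the end corresponds to the pre-patch T3 fallback.

#include <stdio.h>

struct range { unsigned long start, end; };	/* [start, end) is occupied */

#define ALIGN_UP(x, a)	(((x) + (a) - 1) & ~((a) - 1))

/*
 * Find an aligned hole of 'size' bytes at or after 'addr'.
 * 'ranges' is sorted by address and stands in for vmap_area_list;
 * the binary search stands in for the rbtree descent.
 */
static unsigned long find_hole(const struct range *ranges, int n,
			       unsigned long addr, unsigned long size,
			       unsigned long align)
{
	int lo = 0, hi = n - 1, first = n;

	/* T1: descent, with the patch's gap check at every probed node */
	while (lo <= hi) {
		int mid = lo + (hi - lo) / 2;
		unsigned long hole = ALIGN_UP(ranges[mid].end, align);
		unsigned long next = (mid + 1 < n) ? ranges[mid + 1].start : ~0UL;

		if (ranges[mid].end >= addr) {
			first = mid;		/* leftmost candidate so far */
			/* hole >= addr here, since ranges[mid].end >= addr */
			if (hole + size <= next)
				return hole;	/* early exit: T2 + T3 skipped */
			hi = mid - 1;
		} else {
			lo = mid + 1;
		}
	}

	/* T3: pre-patch fallback, linear walk starting at 'first' */
	for (int i = first; i < n; i++) {
		unsigned long hole = ALIGN_UP(ranges[i].end, align);
		unsigned long next = (i + 1 < n) ? ranges[i + 1].start : ~0UL;

		if (hole + size <= next)
			return hole;
	}
	return 0;	/* no hole large enough */
}

int main(void)
{
	/* three occupied ranges; the hole [0x5000, 0x9000) is free */
	struct range ranges[] = {
		{ 0x1000, 0x2000 }, { 0x2000, 0x5000 }, { 0x9000, 0xa000 },
	};

	printf("hole at 0x%lx\n",
	       find_hole(ranges, 3, 0x1000, 0x1000, 0x1000));	/* 0x5000 */
	return 0;
}

Built with e.g. "gcc -std=c99 -Wall hole.c", this prints "hole at 0x5000": the probe of the middle range already sees a gap large enough before the next range, so the walk from 'first' is never needed. Note that in this model the early exit returns the first fitting hole seen on the way down, which is not necessarily the lowest-address hole the full walk would have picked; that trade-off is inherent to checking gaps during the descent.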