In the current code, cached_hole_size is reset to the maximal value whenever
the unmapped vma lies below free_area_cache, so the next search starts over
from the base address. Instead, we can keep cached_hole_size, so that when the
next requested size is larger than cached_hole_size, the search can still
start from free_area_cache.

Signed-off-by: Xiao Guangrong <xiaoguangrong@xxxxxxxxxxxxxxxxxx>
---
 mm/mmap.c |    4 +---
 1 files changed, 1 insertions(+), 3 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index 3f758c7..970f572 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1423,10 +1423,8 @@ void arch_unmap_area(struct mm_struct *mm, unsigned long addr)
 	/*
 	 * Is this a new hole at the lowest possible address?
 	 */
-	if (addr >= TASK_UNMAPPED_BASE && addr < mm->free_area_cache) {
+	if (addr >= TASK_UNMAPPED_BASE && addr < mm->free_area_cache)
 		mm->free_area_cache = addr;
-		mm->cached_hole_size = ~0UL;
-	}
 }

 /*
--
1.7.7.5