On Mon, Jul 17, 2017 at 4:29 PM, Michal Hocko <mhocko@xxxxxxxxxx> wrote:
> On Mon 17-07-17 15:27:31, Zhaoyang Huang wrote:
>> From: Zhaoyang Huang <zhaoyang.huang@xxxxxxxxxxxxxx>
>>
>> There is no need to walk back to the very beginning of the free area in
>> alloc_vmap_area(); a fitting hole can be detected by checking each node
>> during the rb tree walk.
>>
>> With the current approach, the worst case is that the starting node
>> found for searching 'vmap_area_list' is close to 'vstart', while the
>> first hole that actually fits is near the tail (especially on the left
>> branch). This commit starts the list search at the first available
>> node, which saves both the walk down the rb tree (1) and the walk
>> along the list (2).
>>
>>           vmap_area_root
>>             /       \
>>        tmp_next      U
>>          /    (1)
>>        tmp
>>        /
>>      ...
>>      /
>>    first (current approach)
>>
>>    vmap_area_list->...->first->...->tmp->tmp_next
>>                   (2)
>
> This still doesn't answer the questions posted for your previous version:
> http://lkml.kernel.org/r/20170717070024.GC7397@xxxxxxxxxxxxxx
>
> Please note that it is really important to describe _why_ the patch is
> needed. What has changed can easily be read in the diff...
>
I ran some tests on an ARM64 platform and found no significant improvement
nor any regression for vmalloc. On further investigation, I find that the
vmalloc area on a 64-bit arch is so huge that allocations never reach the
end of 'vmap_area_list'; newly allocated areas simply keep growing upward,
so there seems to be no chance to use the rb tree. I will try to find a
32-bit platform for more testing.
>> Signed-off-by: Zhaoyang Huang <zhaoyang.huang@xxxxxxxxxxxxxx>
>> ---
>>  mm/vmalloc.c | 7 +++++++
>>  1 file changed, 7 insertions(+)
>>
>> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
>> index 34a1c3e..f833e07 100644
>> --- a/mm/vmalloc.c
>> +++ b/mm/vmalloc.c
>> @@ -459,9 +459,16 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
>>
>>  		while (n) {
>>  			struct vmap_area *tmp;
>> +			struct vmap_area *tmp_next;
>>  			tmp = rb_entry(n, struct vmap_area, rb_node);
>> +			tmp_next = list_next_entry(tmp, list);
>>  			if (tmp->va_end >= addr) {
>>  				first = tmp;
>> +				if (ALIGN(tmp->va_end, align) + size
>> +						< tmp_next->va_start) {
>> +					addr = ALIGN(tmp->va_end, align);
>> +					goto found;
>> +				}
>>  				if (tmp->va_start <= addr)
>>  					break;
>>  				n = n->rb_left;
>> --
>> 1.9.1
>>
>> --
>> To unsubscribe, send a message with 'unsubscribe linux-mm' in
>> the body to majordomo@xxxxxxxxx.  For more info on Linux MM,
>> see: http://www.linux-mm.org/ .
>> Don't email: <a href=mailto:"dont@xxxxxxxxx"> email@xxxxxxxxx </a>
>
> --
> Michal Hocko
> SUSE Labs