On 12/19/22 at 09:09am, Lorenzo Stoakes wrote:
> On Mon, Dec 19, 2022 at 04:01:00PM +0800, Baoquan He wrote:
> > On 12/17/22 at 11:44am, Lorenzo Stoakes wrote:
> > > On Sat, Dec 17, 2022 at 09:54:30AM +0800, Baoquan He wrote:
> > > > @@ -2229,8 +2236,12 @@ void vm_unmap_ram(const void *mem, unsigned int count)
> > > >  		return;
> > > >  	}
> > > >
> > > > -	va = find_vmap_area(addr);
> > > > +	spin_lock(&vmap_area_lock);
> > > > +	va = __find_vmap_area((unsigned long)addr, &vmap_area_root);
> > > >  	BUG_ON(!va);
> > > > +	if (va)
> > > > +		va->flags &= ~VMAP_RAM;
> > > > +	spin_unlock(&vmap_area_lock);
> > > >  	debug_check_no_locks_freed((void *)va->va_start,
> > > >  				    (va->va_end - va->va_start));
> > > >  	free_unmap_vmap_area(va);
> > >
> > > Would it be better to perform the BUG_ON() after the lock is released? You
> > > already check if va exists before unmasking so it's safe.
> >
> > It's a little unclear to me why we care whether BUG_ON() is performed before or
> > after the lock is released. We won't have a stable kernel after BUG_ON(),
> > right?
>
> BUG_ON()'s can be recoverable in user context and it would be a very simple
> change that would not fundamentally alter anything to simply place the added
> lines prior to the BUG_ON().
>
> The code as-is doesn't really make sense anyway, you BUG_ON(!va) then check if
> va is non-null, then immediately the function afterwards passes va around as if
> it were not null, so I think it'd also be an aesthetic and logical improvement
> :)

In fact, I should not do the checking, but do the clearing anyway. If I
change it as below, does it look better to you?

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 5e578563784a..369b848d097a 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2253,8 +2253,7 @@ void vm_unmap_ram(const void *mem, unsigned int count)
 	spin_lock(&vmap_area_lock);
 	va = __find_vmap_area((unsigned long)addr, &vmap_area_root);
 	BUG_ON(!va);
-	if (va)
-		va->flags &= ~VMAP_RAM;
+	va->flags &= ~VMAP_RAM;
 	spin_unlock(&vmap_area_lock);
 	debug_check_no_locks_freed((void *)va->va_start,
 				   (va->va_end - va->va_start));

> > >
> > > Also, do we want to clear VMAP_BLOCK here?
> >
> > I do, but I don't find a good place to clear VMAP_BLOCK.
> >
> > In v1, I tried to clear it in free_vmap_area_noflush() as below, but
> > Uladzislau disliked it, so I removed it. My thinking is that when we unmap
> > and free the vmap area, the vmap_area is moved from vmap_area_root into
> > &free_vmap_area_root. When we allocate a new vmap_area via
> > alloc_vmap_area(), we will allocate a new va by kmem_cache_alloc_node(),
> > and the va->flags must be 0. So it seems that not initializing it to 0
> > won't impact anything.
>
> You are at this point clearing the VMAP_RAM flag though, so if it is unimportant
> what the flags are after this call, why are you clearing this one?

In my understanding, we had better do the clearing. Currently, from the code,
not doing the clearing won't cause an issue. If possible, I would still like
to clear it as in the draft code quoted below.

> It is just a little confusing, I wonder whether the VMAP_BLOCK flag is necessary
> at all, is it possible to just treat a non-VMAP_BLOCK VMAP_RAM area as if it
> were simply a fully occupied block? Do we gain much by the distinction?

Yeah, the VMAP_BLOCK flag is necessary. A vmap_block contains a used region
plus dirty/free regions, while a non-vmap_block vm_map_ram area is similar to
a normal (non-vm_map_ram) area. When reading out vm_map_ram regions, the
vmap_block regions need to be treated differently.
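Just to make the distinction concrete, below is a rough, untested sketch (not
code from this series; the helper name and the branch bodies are made up here
purely for illustration) of how a read-out path could branch on the two flags:

/*
 * Sketch only: a hypothetical read-out helper distinguishing the two
 * kinds of vm_map_ram areas by the flags added in this patch.
 */
static void vmap_ram_read_out_sketch(unsigned long addr)
{
	struct vmap_area *va;

	spin_lock(&vmap_area_lock);
	va = __find_vmap_area(addr, &vmap_area_root);
	if (va && (va->flags & VMAP_RAM)) {
		if (va->flags & VMAP_BLOCK) {
			/*
			 * vmap_block area: only the regions currently in
			 * use inside the block are mapped, so a reader has
			 * to walk those and skip the free/dirty regions.
			 */
		} else {
			/*
			 * Plain vm_map_ram area: fully mapped, so it can be
			 * read out like a normal vmalloc area.
			 */
		}
	}
	spin_unlock(&vmap_area_lock);
}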
>
> > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > index 5d3fd3e6fe09..d6f376060d83 100644
> > --- a/mm/vmalloc.c
> > +++ b/mm/vmalloc.c
> > @@ -1815,6 +1815,7 @@ static void free_vmap_area_noflush(struct vmap_area *va)
> >
> >  	spin_lock(&vmap_area_lock);
> >  	unlink_va(va, &vmap_area_root);
> > +	va->flags = 0;
> >  	spin_unlock(&vmap_area_lock);
> >
> >  	nr_lazy = atomic_long_add_return((va->va_end - va->va_start) >>
> >