>
> On 3/18/2021 10:29 PM, Uladzislau Rezki wrote:
> > On Thu, Mar 18, 2021 at 03:38:25PM +0530, vjitta@xxxxxxxxxxxxxx wrote:
> >> From: Vijayanand Jitta <vjitta@xxxxxxxxxxxxxx>
> >>
> >> A potential use after free can occur in _vm_unmap_aliases
> >> where an already freed vmap_area could be accessed. Consider
> >> the following scenario:
> >>
> >> Process 1                                       Process 2
> >>
> >> __vm_unmap_aliases                              __vm_unmap_aliases
> >>   purge_fragmented_blocks_allcpus                 rcu_read_lock()
> >>     rcu_read_lock()
> >>       list_del_rcu(&vb->free_list)
> >>                                                   list_for_each_entry_rcu(vb .. )
> >> __purge_vmap_area_lazy
> >>   kmem_cache_free(va)
> >>                                                   va_start = vb->va->va_start
> > Or maybe we should switch to kfree_rcu() instead of kmem_cache_free()?
> >
> > --
> > Vlad Rezki
> >
>
> Thanks for the suggestion.
>
> I see that free_vmap_area_lock (a spinlock) is taken in __purge_vmap_area_lazy
> while it loops through the list and calls kmem_cache_free() on the va's. So it
> looks like we can't replace it with kfree_rcu(), as it might cause scheduling
> within atomic context.
>
The double-argument kfree_rcu() is safe to use from atomic contexts; it does
not use any sleeping primitives, so kmem_cache_free() could be replaced with it.

On the other hand, I see that the per-cpu KVA allocator is the only user of
the RCU here, and your change fixes it.

Feel free to use:

Reviewed-by: Uladzislau Rezki (Sony) <urezki@xxxxxxxxx>

Thanks.

--
Vlad Rezki
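
For reference, a minimal sketch of the pattern being discussed above. The
struct, lock, and function names are hypothetical and only illustrate the
point; the object is assumed to be kmalloc()-allocated, since the
two-argument kfree_rcu() ultimately frees it with kfree() after a grace
period.

#include <linux/slab.h>
#include <linux/rcupdate.h>
#include <linux/spinlock.h>

struct foo {
	int data;
	struct rcu_head rcu;	/* needed by the two-argument kfree_rcu() */
};

static DEFINE_SPINLOCK(foo_lock);

static void foo_release(struct foo *f)
{
	spin_lock(&foo_lock);	/* atomic context: sleeping is not allowed */
	/*
	 * An immediate free here could race with RCU readers that still
	 * hold a reference to f. kfree_rcu(ptr, field) only queues the
	 * object and never sleeps, so it is safe under the spinlock; the
	 * actual kfree() happens after a grace period has elapsed.
	 */
	kfree_rcu(f, rcu);
	spin_unlock(&foo_lock);
}

The single-argument form, by contrast, may block and is only meant for
sleepable contexts, which is why the double-argument form is the one that
is safe under a lock.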