Re: [PATCH v10 1/5] kasan: support backing vmalloc space with real shadow memory

Hello, Daniel

>  
> @@ -1294,14 +1299,19 @@ static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end)
>  	spin_lock(&free_vmap_area_lock);
>  	llist_for_each_entry_safe(va, n_va, valist, purge_list) {
>  		unsigned long nr = (va->va_end - va->va_start) >> PAGE_SHIFT;
> +		unsigned long orig_start = va->va_start;
> +		unsigned long orig_end = va->va_end;
>  
>  		/*
>  		 * Finally insert or merge lazily-freed area. It is
>  		 * detached and there is no need to "unlink" it from
>  		 * anything.
>  		 */
> -		merge_or_add_vmap_area(va,
> -			&free_vmap_area_root, &free_vmap_area_list);
> +		va = merge_or_add_vmap_area(va, &free_vmap_area_root,
> +					    &free_vmap_area_list);
> +
> +		kasan_release_vmalloc(orig_start, orig_end,
> +				      va->va_start, va->va_end);
>  
I have some questions here. I have not analyzed the kasan_release_vmalloc()
logic in detail, so sorry if I miss something. __purge_vmap_area_lazy()
deals with a big address space, so it does not free only vmalloc addresses
here; basically they can be anything from 1 up to ULONG_MAX, whereas the
vmalloc space spans from VMALLOC_START to VMALLOC_END:

1) Should it be checked that only vmalloc addresses are freed here, or do
you handle that somewhere else?

if (is_vmalloc_addr((void *)va->va_start))
    kasan_release_vmalloc(...)
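
Just to make it concrete, here is an untested sketch of where such a check
could sit in the loop above (checking orig_start, since "va" can point to a
merged area after merge_or_add_vmap_area()):

<snip>
	va = merge_or_add_vmap_area(va, &free_vmap_area_root,
				    &free_vmap_area_list);

	/* Only touch the KASAN vmalloc shadow for vmalloc-range addresses. */
	if (is_vmalloc_addr((void *)orig_start))
		kasan_release_vmalloc(orig_start, orig_end,
				      va->va_start, va->va_end);
<snip>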

2) Have you run any benchmarking just to see how much overhead it adds?
I am asking because it probably makes sense to add those figures to the
changelog (commit message). For example you can run:

<snip>
sudo ./test_vmalloc.sh performance
and
sudo ./test_vmalloc.sh sequential_test_order=1
<snip>

Thanks!

--
Vlad Rezki



