Re: [PATCH v10 1/5] kasan: support backing vmalloc space with real shadow memory

On 10/29/19 7:20 AM, Daniel Axtens wrote:
> Hook into vmalloc and vmap, and dynamically allocate real shadow
> memory to back the mappings.
> 
> Most mappings in vmalloc space are small, requiring less than a full
> page of shadow space. Allocating a full shadow page per mapping would
> therefore be wasteful. Furthermore, to ensure that different mappings
> use different shadow pages, mappings would have to be aligned to
> KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE.
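
(For concreteness: with the generic KASAN scale of 8 bytes of memory
per shadow byte and 4K pages, one shadow page covers 8 * 4096 = 32K of
vmalloc space, so every mapping would need 32K alignment just to keep
its shadow pages disjoint from its neighbours'.)
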
> 
> Instead, share backing space across multiple mappings. Allocate a
> backing page when a mapping in vmalloc space uses a particular page of
> the shadow region. This page can be shared by other vmalloc mappings
> later on.
> 
> We hook into the vmap infrastructure to lazily clean up unused shadow
> memory.
> 
> To avoid the difficulty of swapping mappings around, this code
> expects that the part of the shadow region that covers the vmalloc
> space will not be covered by the early shadow page, but will be left
> unmapped. This will require changes in arch-specific code.
> 
> This allows KASAN to work with VMAP_STACK, and may be helpful for architectures
> that do not have a separate module space (e.g. powerpc64, which I am
> currently working on). It also allows relaxing the module alignment
> back to PAGE_SIZE.
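
For anyone who wants the shape of the mechanism without digging through
the whole patch, the allocation path boils down to roughly the sketch
below. This is simplified and the names are approximate; the real patch
also poisons freshly allocated shadow pages and unwinds on failure:

#include <linux/kasan.h>
#include <linux/mm.h>

static int shadow_pte_populate(pte_t *ptep, unsigned long addr,
			       void *unused)
{
	unsigned long page;
	pte_t pte;

	/* Another mapping may already have backed this shadow page. */
	if (likely(!pte_none(*ptep)))
		return 0;

	page = __get_free_page(GFP_KERNEL);
	if (!page)
		return -ENOMEM;

	pte = pfn_pte(PFN_DOWN(__pa(page)), PAGE_KERNEL);

	spin_lock(&init_mm.page_table_lock);
	if (likely(pte_none(*ptep))) {
		set_pte_at(&init_mm, addr, ptep, pte);
		page = 0;
	}
	spin_unlock(&init_mm.page_table_lock);

	/* We raced with another vmalloc and lost; share the winner's page. */
	if (page)
		free_page(page);

	return 0;
}

static int shadow_populate(unsigned long addr, unsigned long size)
{
	unsigned long start, end;

	start = ALIGN_DOWN((unsigned long)kasan_mem_to_shadow((void *)addr),
			   PAGE_SIZE);
	end = ALIGN((unsigned long)kasan_mem_to_shadow((void *)(addr + size)),
		    PAGE_SIZE);

	/*
	 * apply_to_page_range() allocates any missing page-table levels
	 * and calls shadow_pte_populate() for each pte in the range.
	 */
	return apply_to_page_range(&init_mm, start, end - start,
				   shadow_pte_populate, NULL);
}
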
> 
> Link: https://bugzilla.kernel.org/show_bug.cgi?id=202009
> Acked-by: Vasily Gorbik <gor@xxxxxxxxxxxxx>
> Co-developed-by: Mark Rutland <mark.rutland@xxxxxxx>
> Signed-off-by: Mark Rutland <mark.rutland@xxxxxxx> [shadow rework]
> Signed-off-by: Daniel Axtens <dja@xxxxxxxxxx>


Small nit below; otherwise looks fine:

Reviewed-by: Andrey Ryabinin <aryabinin@xxxxxxxxxxxxx>



>  static __always_inline bool
> @@ -1196,8 +1201,8 @@ static void free_vmap_area(struct vmap_area *va)
>  	 * Insert/Merge it back to the free tree/list.
>  	 */
>  	spin_lock(&free_vmap_area_lock);
> -	merge_or_add_vmap_area(va,
> -		&free_vmap_area_root, &free_vmap_area_list);
> +	(void)merge_or_add_vmap_area(va, &free_vmap_area_root,
> +				     &free_vmap_area_list);
>  	spin_unlock(&free_vmap_area_lock);
>  }
>  
..
>  
> @@ -3391,8 +3428,8 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
>  	 * and when pcpu_get_vm_areas() is success.
>  	 */
>  	while (area--) {
> -		merge_or_add_vmap_area(vas[area],
> -			&free_vmap_area_root, &free_vmap_area_list);
> +		(void)merge_or_add_vmap_area(vas[area], &free_vmap_area_root,

I don't think these (void) casts are necessary.

> +					     &free_vmap_area_list);
>  		vas[area] = NULL;
>  	}
>  
> 
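
For context on the nit: as far as I can tell merge_or_add_vmap_area()
is not annotated __must_check, so the compiler never warns when its
return value is ignored, and the (void) cast is purely cosmetic. A
minimal illustration, outside the kernel tree:

/* Compile with -Wall: neither call below produces a warning, because
 * plain() lacks __attribute__((warn_unused_result)) (the kernel's
 * __must_check). */
int plain(void) { return 0; }

void caller(void)
{
	plain();	/* ignored return value, no warning */
	(void)plain();	/* same, the cast adds nothing */
}
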



