Re: [PATCH] vmalloc: Convert to XArray

Again, nice - for the follow-up:

Reviewed-by: William Kucharski <william.kucharski@xxxxxxxxxx>

> On Jun 3, 2020, at 9:33 PM, Matthew Wilcox <willy@xxxxxxxxxxxxx> wrote:
> 
> On Wed, Jun 03, 2020 at 11:35:24AM -0600, William Kucharski wrote:
>>> -	err = radix_tree_preload(gfp_mask);
>>> -	if (unlikely(err)) {
>>> +	err = xa_insert(&vmap_blocks, vb_idx, vb, gfp_mask);
>>> +	if (err) {
>> 
>> Should the "(err)" here be "unlikely(err)" as the radix tree version was?
> 
> That's a good question.  GCC used to be stupider, and we had to help it
> out by annotating which paths were more or less likely to be taken.
> Now it generally makes the right decision (e.g. understanding that an
> "early exit" which unwinds state from a function is likely to be an
> error case and thus the slow path), so we no longer need to mark nearly
> as much code with unlikely() as we used to.  The same is true of
> prefetch() calls.
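> 
> A minimal sketch of what the annotation does (the macro definitions are
> the real ones from include/linux/compiler.h; the toy function is
> hypothetical):
> 
> 	/* From include/linux/compiler.h: static branch-prediction hints */
> 	#define likely(x)	__builtin_expect(!!(x), 1)
> 	#define unlikely(x)	__builtin_expect(!!(x), 0)
> 
> 	int toy_insert(int err)
> 	{
> 		if (unlikely(err))	/* hint: the error branch is cold */
> 			return err;	/* compiler lays this out out of line */
> 		return 0;
> 	}
> 
> With the hint (or with the compiler's own heuristics these days), the
> fast path is emitted as straight-line code and the error return is
> pushed out of it.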
> 
> I took a look at the disassembly of this code with and without the
> unlikely(), and I also compared if (err) with if (err < 0).  In the end,
> it makes no difference to the control flow (all variants jump to the end
> of the function), although it changes the register allocation decisions
> a little(!).
> 
> What did make a difference was moving all the error handling to the
> end of the function; that reduced the size of the function by 48 bytes.
> This is with gcc-9.3.  I can submit this patch as a follow-up since it's
> basically unrelated to the other change.
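> 
> (One way to reproduce the measurement, assuming before/after builds of
> mm/vmalloc.o, is the in-tree size-diff script:
> 
> 	./scripts/bloat-o-meter vmalloc.o.old vmalloc.o.new
> 
> which prints the per-function size delta.)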
> 
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 375bbb410a94..3d5b5c32c840 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -1569,10 +1569,8 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
> 	va = alloc_vmap_area(VMAP_BLOCK_SIZE, VMAP_BLOCK_SIZE,
> 					VMALLOC_START, VMALLOC_END,
> 					node, gfp_mask);
> -	if (IS_ERR(va)) {
> -		kfree(vb);
> -		return ERR_CAST(va);
> -	}
> +	if (IS_ERR(va))
> +		goto free_vb;
> 
> 	vaddr = vmap_block_vaddr(va->va_start, 0);
> 	spin_lock_init(&vb->lock);
> @@ -1587,11 +1585,8 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
> 
> 	vb_idx = addr_to_vb_idx(va->va_start);
> 	err = xa_insert(&vmap_blocks, vb_idx, vb, gfp_mask);
> -	if (err) {
> -		kfree(vb);
> -		free_vmap_area(va);
> -		return ERR_PTR(err);
> -	}
> +	if (err < 0)
> +		goto free_va;
> 
> 	vbq = &get_cpu_var(vmap_block_queue);
> 	spin_lock(&vbq->lock);
> @@ -1600,6 +1595,13 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
> 	put_cpu_var(vmap_block_queue);
> 
> 	return vaddr;
> +
> +free_va:
> +	free_vmap_area(va);
> +	va = ERR_PTR(err);
> +free_vb:
> +	kfree(vb);
> +	return ERR_CAST(va);
> }
> 
> static void free_vmap_block(struct vmap_block *vb)
> 
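> For reference, a hedged sketch of the ERR_PTR()/ERR_CAST() idiom that
> the new tail labels rely on (the helpers from linux/err.h are real; the
> foo functions are made up):
> 
> 	#include <linux/err.h>
> 
> 	static void *toy_create(void)
> 	{
> 		struct foo *f = alloc_foo();	/* returns ERR_PTR() on failure */
> 		int err;
> 
> 		if (IS_ERR(f))
> 			goto out;		/* propagate alloc_foo()'s errno */
> 
> 		err = register_foo(f);
> 		if (err < 0) {
> 			free_foo(f);
> 			f = ERR_PTR(err);	/* stash errno for the shared exit */
> 		}
> 	out:
> 		return ERR_CAST(f);		/* retype without losing the errno */
> 	}
> 
> The errno is encoded in the pointer value itself, which is why setting
> va = ERR_PTR(err) before falling through lets the single ERR_CAST(va)
> return serve both failure paths.
> 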
>> Nice change and simplifies the code quite a bit.
>> 
>> Reviewed-by: William Kucharski <william.kucharski@xxxxxxxxxx>
> 