Re: [PATCH for-next v4 05/13] RDMA/rxe: Replace RB tree by xarray for indexes

On Wed, Nov 03, 2021 at 12:02:34AM -0500, Bob Pearson wrote:
> @@ -342,8 +229,18 @@ void *rxe_alloc_locked(struct rxe_pool *pool)
>  	elem->obj = obj;
>  	kref_init(&elem->ref_cnt);
>  
> +	if (pool->flags & RXE_POOL_INDEX) {
> +		err = xa_alloc_cyclic_bh(&pool->xarray.xa, &elem->index, elem,
> +					 pool->xarray.limit,
> +					 &pool->xarray.next, GFP_ATOMIC);

This uses the _bh lock

>  void rxe_elem_release(struct kref *kref)
> @@ -397,6 +315,9 @@ void rxe_elem_release(struct kref *kref)
>  	struct rxe_pool *pool = elem->pool;
>  	void *obj;
>  
> +	if (pool->flags & RXE_POOL_INDEX)
> +		xa_erase(&pool->xarray.xa, elem->index);

But this doesn't?

Shouldn't they all be the same?

And why is it a bh lock anyhow? Can you add some comments saying which
APIs here are called from the softirq?
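If the allocation side really does need BH disabled, the erase side would presumably want the matching variant. Something like the below (a hypothetical sketch only, assuming the xarray `_bh` helpers are the right choice here):

	/* Sketch: match the xa_alloc_cyclic_bh() on the alloc path by
	 * taking the xa lock with BH disabled on the erase path too.
	 */
	if (pool->flags & RXE_POOL_INDEX)
		xa_erase_bh(&pool->xarray.xa, elem->index);

Or, if nothing here actually runs from softirq context, drop the `_bh` on both sides.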

> +void *rxe_pool_get_index(struct rxe_pool *pool, u32 index)
>  {
>  	struct rxe_pool_elem *elem;
>  	void *obj;
>  
> +	elem = xa_load(&pool->xarray.xa, index);
> +	if (elem) {
>  		kref_get(&elem->ref_cnt);
>  		obj = elem->obj;
>  	} else {

And why isn't this a use-after-free of elem? This pattern is only safe
when using RCU or when holding the spinlock across the
kref_get. Since you can't use RCU with the core-allocated objects, it
seems this needs the xa_lock?
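ie hold the xa lock so the erase path can't free elem between the
lookup and the ref grab, and use kref_get_unless_zero() to reject an
elem whose refcount already hit zero. A rough sketch of what I mean
(hypothetical, names taken from the patch):

	void *rxe_pool_get_index(struct rxe_pool *pool, u32 index)
	{
		struct rxe_pool_elem *elem;
		void *obj = NULL;

		xa_lock_bh(&pool->xarray.xa);
		elem = xa_load(&pool->xarray.xa, index);
		/* Elem cannot be erased while the xa lock is held, and
		 * kref_get_unless_zero() refuses a dying object.
		 */
		if (elem && kref_get_unless_zero(&elem->ref_cnt))
			obj = elem->obj;
		xa_unlock_bh(&pool->xarray.xa);

		return obj;
	}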

Jason


