Re: [PATCH nf-next] netfilter: nft_set_rbtree: use seqcount to avoid lock in most cases

On Wed, Jul 26, 2017 at 02:09:41AM +0200, Florian Westphal wrote:
[...]
> @@ -144,7 +159,9 @@ static int nft_rbtree_insert(const struct net *net, const struct nft_set *set,
>  	int err;
>  
>  	write_lock_bh(&priv->lock);
> +	write_seqcount_begin(&priv->count);
>  	err = __nft_rbtree_insert(net, set, rbe, ext);
> +	write_seqcount_end(&priv->count);
>  	write_unlock_bh(&priv->lock);
>  
>  	return err;
> @@ -158,7 +175,9 @@ static void nft_rbtree_remove(const struct net *net,
>  	struct nft_rbtree_elem *rbe = elem->priv;
>  
>  	write_lock_bh(&priv->lock);

Do we need the spinlock anymore? This is protected by the mutex from
userspace, and we have no support for either timeouts or dynamic set
population from the packet path yet.

> +	write_seqcount_begin(&priv->count);
>  	rb_erase(&rbe->node, &priv->root);
> +	write_seqcount_end(&priv->count);
>  	write_unlock_bh(&priv->lock);
>  }
>  