Wed, May 15, 2019 at 11:13:26AM CEST, pablo@xxxxxxxxxxxxx wrote:
>On Wed, May 15, 2019 at 01:03:31AM +0200, Pablo Neira Ayuso wrote:
>> On Tue, May 14, 2019 at 07:01:08PM +0200, Jiri Pirko wrote:
>> > Thu, May 09, 2019 at 06:39:51PM CEST, pablo@xxxxxxxxxxxxx wrote:
>> > >This patch adds hardware offload support for nftables through the
>> > >existing netdev_ops->ndo_setup_tc() interface, the TC_SETUP_CLSFLOWER
>> > >classifier and the flow rule API. This hardware offload support is
>> > >available for the NFPROTO_NETDEV family and the ingress hook.
>> > >
>> > >Each nftables expression has a new ->offload interface, which is used
>> > >to populate the flow rule object that is attached to the transaction
>> > >object.
>> > >
>> > >There is a new per-table NFT_TABLE_F_HW flag, which is set to offload
>> > >an entire table, including all of its chains.
>> > >
>> > >This patch supports basic metadata (layer 3 and 4 protocol numbers),
>> > >5-tuple payload matching and the accept/drop actions; this includes
>> > >basechain hardware offload only.
>> > >
>> > >Signed-off-by: Pablo Neira Ayuso <pablo@xxxxxxxxxxxxx>
>> >
>> > [...]
>> >
>> > >+static int nft_flow_offload_chain(struct nft_trans *trans,
>> > >+                                  enum flow_block_command cmd)
>> > >+{
>> > >+        struct nft_chain *chain = trans->ctx.chain;
>> > >+        struct netlink_ext_ack extack = {};
>> > >+        struct flow_block_offload bo = {};
>> > >+        struct nft_base_chain *basechain;
>> > >+        struct net_device *dev;
>> > >+        int err;
>> > >+
>> > >+        if (!nft_is_base_chain(chain))
>> > >+                return -EOPNOTSUPP;
>> > >+
>> > >+        basechain = nft_base_chain(chain);
>> > >+        dev = basechain->ops.dev;
>> > >+        if (!dev)
>> > >+                return -EOPNOTSUPP;
>> > >+
>> > >+        bo.command = cmd;
>> > >+        bo.binder_type = TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS;
>> > >+        bo.block_index = (u32)trans->ctx.chain->handle;
>> > >+        bo.extack = &extack;
>> > >+        INIT_LIST_HEAD(&bo.cb_list);
>> > >+
>> > >+        err = dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_BLOCK, &bo);
>> >
>> > Okay, so you pretend to be clsact-ingress-flower. That looks fine.
>> > But how do you ensure that the real one does not bind a block on the
>> > same device too?
>>
>> I could store the interface index in the block_cb object, then use the
>> tuple [ cb, cb_ident, ifindex ] to check if the block is already bound
>> when flow_block_cb_alloc() is called.
>
>Actually cb_ident would be sufficient. One possibility would be to

That is what I wrote :)

>extend flow_block_cb_alloc() to check for an existing binding.
>
>diff --git a/net/core/flow_offload.c b/net/core/flow_offload.c
>index cf984ef05609..44172014cebe 100644
>--- a/net/core/flow_offload.c
>+++ b/net/core/flow_offload.c
>@@ -193,9 +193,15 @@ struct flow_block_cb *flow_block_cb_alloc(u32 block_index, tc_setup_cb_t *cb,
> {
>         struct flow_block_cb *block_cb;
>
>+        list_for_each_entry(block_cb, &flow_block_cb_list, list) {
>+                if (block_cb->cb == cb &&
>+                    block_cb->cb_ident == cb_ident)
>+                        return ERR_PTR(-EBUSY);
>+        }
>+
>         block_cb = kzalloc(sizeof(*block_cb), GFP_KERNEL);
>         if (!block_cb)
>-                return NULL;
>+                return ERR_PTR(-ENOMEM);
>
>         block_cb->cb = cb;
>         block_cb->cb_ident = cb_ident;
>
>Thanks.
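
One consequence of the diff above worth spelling out: flow_block_cb_alloc()
would now return ERR_PTR() codes instead of NULL, so every caller has to be
converted from a NULL check to IS_ERR()/PTR_ERR(), and can then distinguish
-EBUSY (a block already bound for this cb/cb_ident, e.g. by clsact/flower)
from -ENOMEM. A rough caller-side sketch follows; the argument list past cb
is assumed here, since only block_index and cb are visible in the quoted
prototype, and the cb_list attachment step is only indicated as a comment:

        block_cb = flow_block_cb_alloc(bo->block_index, cb, cb_ident, cb_priv);
        if (IS_ERR(block_cb)) {
                /* -EBUSY: block already bound by another subsystem;
                 * -ENOMEM: allocation failure, as before the change.
                 */
                return PTR_ERR(block_cb);
        }

        /* ...then attach block_cb to bo->cb_list as the rest of the
         * series does for the NULL-returning variant...
         */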