Re: [PATCH net-next,v4 08/12] drivers: net: use flow block API

On Wed, Aug 14, 2019 at 05:17:20PM +0100, Edward Cree wrote:
> On 13/08/2019 20:51, Pablo Neira Ayuso wrote:
> > On Mon, Aug 12, 2019 at 06:50:09PM +0100, Edward Cree wrote:
> >> Pablo, can you explain (because this commit message doesn't) why these per-
> >>  driver lists are needed, and what the information/state is that has module
> >>  (rather than, say, netdevice) scope?
> >
> > The idea is to update drivers to support one flow_block per subsystem,
> > one for ethtool, one for tc, and so on. So far, existing drivers only
> > allow for binding one single flow_block to one of the existing
> > subsystems. So this limitation applies at driver level.
>
> That argues for per-driver _code_, not for per-driver _state_. For instance,
>  each driver could (more logically) store this information in the netdev
>  private data, rather than a static global.  Or even, since each driver
>  instance has a unique cb_ident = netdev_priv(net_dev), this doesn't need to
>  be local to the driver at all and could just belong to the device owning the
>  flow_block (which isn't necessarily the device doing the offload, per
>  indirect blocks).

This list could be placed in the netdev_priv() area, yes.
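Just to illustrate the idea, here is a rough, untested sketch for a
hypothetical 'foo' driver, with the list moved from a static global
into the driver's private data. foo_priv, foo_setup_tc_block and the
callback are made up for the example; this builds on the helpers in
net/flow_offload.h, unbind path omitted:

	struct foo_priv {
		struct list_head block_cb_list;	/* was: static LIST_HEAD() */
	};

	static int foo_setup_tc_block_cb(enum tc_setup_type type,
					 void *type_data, void *cb_priv)
	{
		/* translate the offload request for the hardware here */
		return -EOPNOTSUPP;
	}

	static int foo_setup_tc_block(struct net_device *dev,
				      struct flow_block_offload *f)
	{
		struct foo_priv *priv = netdev_priv(dev);
		struct flow_block_cb *block_cb;

		if (f->command != FLOW_BLOCK_BIND)
			return -EOPNOTSUPP;

		f->driver_block_list = &priv->block_cb_list;

		/* reject a second binding of this device's callback */
		if (flow_block_cb_is_busy(foo_setup_tc_block_cb, priv,
					  &priv->block_cb_list))
			return -EBUSY;

		block_cb = flow_block_cb_alloc(foo_setup_tc_block_cb,
					       priv, priv, NULL);
		if (IS_ERR(block_cb))
			return PTR_ERR(block_cb);

		flow_block_cb_add(block_cb, f);
		list_add_tail(&block_cb->driver_list, &priv->block_cb_list);
		return 0;
	}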

> TBH I'm still not clear why you need a flow_block per subsystem, rather than
>  just having multiple subsystems feed their offload requests through the same
>  flow_block but with different enum tc_setup_type or enum tc_fl_command or
>  some other indication that this is "netfilter" rather than "tc" asking for a
>  tc_cls_flower_offload.

In tc, the flow_block is set up when the ingress qdisc is registered.
The usual scenario for most drivers is one single flow_block per
registered ingress qdisc, which makes a 1:1 mapping between ingress
qdisc and flow_block.
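For example:

	# tc qdisc add dev eth0 ingress
	# tc qdisc add dev eth1 ingress

Each of these two ingress qdiscs has its own tcf_block and, hence,
its own flow_block.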

Still, you can register two or more ingress qdiscs and make them share
the same policy via 'tc block'. In that case, all of those qdiscs use
one single flow_block, which makes an N:1 mapping between ingress
qdiscs and the flow_block: the policy applies to all ingress qdiscs
that are part of the same tc block. By 'tc block', I'm referring to
the tcf_block structure.
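For example, attaching two devices to the same shared block:

	# tc qdisc add dev eth0 ingress_block 22 ingress
	# tc qdisc add dev eth1 ingress_block 22 ingress
	# tc filter add block 22 protocol ip flower dst_ip 10.0.0.1 action drop

The flower rule is added to block 22 once and it filters ingress
traffic on both eth0 and eth1.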

In netfilter, there are ingress basechains that are registered per
device. Each basechain gets a flow_block when the basechain is
registered. Shared blocks as in tcf_block are not supported yet, but
it should not be hard to extend things to make this work.
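For example:

	# nft add table netdev filter
	# nft add chain netdev filter c0 '{ type filter hook ingress device eth0 priority 0; }'
	# nft add chain netdev filter c1 '{ type filter hook ingress device eth1 priority 0; }'

Each of these two basechains hooks one single device, so each of them
gets its own flow_block when offloaded.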

To reuse the same flow_block as the entry point for all subsystems, as
you propose - assuming offloads for two or more subsystems are in
place - all of them would need to have the same block sharing
configuration, which might not be the case: tc ingress might have eth0
and eth1 sharing the same policy via one flow_block, while netfilter
might have one basechain for eth0 and another one for eth1 (no policy
sharing).

[...]
> This really needs a design document explaining what all the bits are, how
>  they fit together, and why they need to be like that.

I did not design this flow_block abstraction: the concept was already
in place under a different name, and I extended it so that the
ethtool/netfilter subsystems can avoid duplicating driver code for
offloads. Having said this, I agree that documentation would be good
to have.


