This little series makes all of the flow dissection functions take the
raw input data pointer as const (1-5) and shuffles the branches in
__skb_header_pointer() according to their hit probability (a sketch of
the idea follows the diffstat below).
The result is +20 Mbps per flow/core with one Flow Dissector pass per
packet. This affects RPS (with software hashing), drivers that use
eth_get_headlen() on their Rx path, and so on.

Alexander Lobakin (6):
  flow_dissector: constify bpf_flow_dissector's data pointers
  skbuff: make __skb_header_pointer()'s data argument const
  flow_dissector: constify raw input @data argument
  linux/etherdevice.h: misc trailing whitespace cleanup
  ethernet: constify eth_get_headlen()'s @data argument
  skbuff: micro-optimize {,__}skb_header_pointer()

 include/linux/etherdevice.h  |  4 ++--
 include/linux/skbuff.h       | 26 +++++++++++------------
 include/net/flow_dissector.h |  6 +++---
 net/core/flow_dissector.c    | 41 +++++++++++++++++++-----------------
 net/ethernet/eth.c           |  2 +-
 5 files changed, 40 insertions(+), 39 deletions(-)

--
2.30.2
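
[ Editor's sketch, not part of the posted series: the branch shuffle in
  patch 6 boils down to annotating the fast path of
  __skb_header_pointer() (the requested header already lies in the
  linear @data) as the likely case, and the skb_copy_bits() fallback as
  unlikely. The snippet below only illustrates that technique against
  the constified signature from patches 2-3; it is not claimed to be
  the exact upstream diff. ]

static inline void *__skb_header_pointer(const struct sk_buff *skb, int offset,
					 int len, const void *data, int hlen,
					 void *buffer)
{
	/* Fast path: header is already contiguous in @data, just return
	 * a pointer into it. This is by far the most common case.
	 */
	if (likely(hlen - offset >= len))
		return (void *)data + offset;

	/* Slow path: pull the header into the caller-provided buffer. */
	if (!skb || unlikely(skb_copy_bits(skb, offset, buffer, len) < 0))
		return NULL;

	return buffer;
}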