On 1/10/20 2:34 PM, Toke Høiland-Jørgensen wrote:
> Jesper Dangaard Brouer <brouer@xxxxxxxxxx> writes:
>
>> On Fri, 10 Jan 2020 15:22:02 +0100
>> Toke Høiland-Jørgensen <toke@xxxxxxxxxx> wrote:
>>
>>> diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
>>> index 2741aa35bec6..1b2bc2a7522e 100644
>>> --- a/include/linux/netdevice.h
>>> +++ b/include/linux/netdevice.h
>> [...]
>>> @@ -1993,6 +1994,8 @@ struct net_device {
>>>  	spinlock_t		tx_global_lock;
>>>  	int			watchdog_timeo;
>>>
>>> +	struct xdp_dev_bulk_queue __percpu *xdp_bulkq;
>>> +
>>>  #ifdef CONFIG_XPS
>>>  	struct xps_dev_maps __rcu	*xps_cpus_map;
>>>  	struct xps_dev_maps __rcu	*xps_rxqs_map;
>>
>> We need to check that the cache-line for this location in struct
>> net_device is not getting updated (write operation) from different
>> CPUs.
>>
>> The test you ran was a single-queue, single-CPU test, which will not
>> show any regression for that case.
>
> Well, pahole says:
>
>   /* --- cacheline 14 boundary (896 bytes) --- */
>   struct netdev_queue *      _tx __attribute__((__aligned__(64))); /*   896     8 */
>   unsigned int               num_tx_queues;        /*   904     4 */
>   unsigned int               real_num_tx_queues;   /*   908     4 */
>   struct Qdisc *             qdisc;                /*   912     8 */
>   struct hlist_head          qdisc_hash[16];       /*   920   128 */
>   /* --- cacheline 16 boundary (1024 bytes) was 24 bytes ago --- */
>   unsigned int               tx_queue_len;         /*  1048     4 */
>   spinlock_t                 tx_global_lock;       /*  1052     4 */
>   int                        watchdog_timeo;       /*  1056     4 */
>
>   /* XXX 4 bytes hole, try to pack */
>
>   struct xdp_dev_bulk_queue * xdp_bulkq;           /*  1064     8 */
>   struct xps_dev_maps *      xps_cpus_map;         /*  1072     8 */
>   struct xps_dev_maps *      xps_rxqs_map;         /*  1080     8 */
>   /* --- cacheline 17 boundary (1088 bytes) --- */
>
> Of those, tx_queue_len is the max queue length (so only set on init),
> tx_global_lock is not used by multi-queue devices, watchdog_timeo also
> seems to be a static value that's set on init, and the xps* pointers
> also only seem to be set once on init. So I think we're fine?
>
> I can run a multi-CPU test just to be sure, but I really don't see
> which of those fields might be updated on TX...

Note that another interesting field is miniq_egress; your patch moves
it to another cache line.

We should probably move the qdisc_hash[] array elsewhere.
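
[Editor's note: to make the cache-line argument above concrete, here is a
minimal sketch of the access pattern being discussed. It is loosely
modelled on the existing devmap bulking code the patch builds on; the
struct layout, DEV_MAP_BULK_SIZE, and the sketch_* helper names are
assumptions for illustration, not the exact patch. The point is that the
xdp_bulkq pointer in struct net_device is written once at device setup,
while all per-packet writes land in the per-CPU area it points to, so the
net_device cache line holding it only sees reads on the forwarding fast
path.]

	/*
	 * Sketch only -- names and layout are assumptions, not the
	 * exact patch under review.
	 */
	#include <linux/netdevice.h>
	#include <linux/percpu.h>
	#include <net/xdp.h>

	#define DEV_MAP_BULK_SIZE 16

	struct xdp_dev_bulk_queue {
		struct xdp_frame *q[DEV_MAP_BULK_SIZE];
		struct net_device *dev_rx;
		unsigned int count;
	};

	/* Control path: the pointer in struct net_device is written
	 * exactly once, when the device is set up.
	 */
	static int sketch_alloc_bulkq(struct net_device *dev)
	{
		dev->xdp_bulkq = alloc_percpu(struct xdp_dev_bulk_queue);
		return dev->xdp_bulkq ? 0 : -ENOMEM;
	}

	/* Fast path (XDP_REDIRECT): only *reads* dev->xdp_bulkq, so the
	 * surrounding net_device cache line can stay shared across CPUs;
	 * all per-packet writes go to the per-CPU queue it points to.
	 */
	static void sketch_bq_enqueue(struct net_device *dev,
				      struct xdp_frame *xdpf)
	{
		struct xdp_dev_bulk_queue *bq = this_cpu_ptr(dev->xdp_bulkq);

		if (bq->count == DEV_MAP_BULK_SIZE) {
			/* a real implementation would flush to the driver here */
			bq->count = 0;
		}
		bq->q[bq->count++] = xdpf;
	}

[With that access pattern, the contention question reduces to whether any
of the fields co-located with xdp_bulkq (tx_global_lock, miniq_egress
after the move, etc.) are written per packet, which is exactly what the
multi-CPU test would show.]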