On Wed, Mar 15, 2023 at 01:06:01PM +0100, Pablo Neira Ayuso wrote:
> On Wed, Mar 15, 2023 at 12:45:33PM +0100, Sven Auhagen wrote:
> > On Wed, Mar 15, 2023 at 12:39:46PM +0100, Pablo Neira Ayuso wrote:
> > > Hi Sven,
> >
> > Hi Pablo,
> >
> > > On Tue, Feb 28, 2023 at 11:14:13AM +0100, Sven Auhagen wrote:
> > > > Add a counter per namespace so we know the total offloaded
> > > > flows.
> > >
> > > Thanks for your patch.
> > >
> > > I would like to avoid this atomic operation in the packet path; it
> > > should be possible to rewrite this with percpu counters.
> >
> > Isn't it possible, though, that a flow is added and then removed
> > on two different CPUs, so that I might end up with negative counters
> > on one CPU?
>
> I mean, keep per-CPU counters for additions and deletions. Then, when
> dumping, you could collect them and provide the number.
>
> We used to have these stats for conntrack in:
>
> /proc/net/stat/nf_conntrack
>
> but they were removed; see 'insert' and 'delete', they never get
> updated anymore. conntrack is using an atomic for this (cnet->count),
> but it is required for the upper cap (maximum number of entries).

I see, that makes sense. Let me rework the patch to use per-CPU insert
and delete counters and send it in as v5.

Thanks
Sven

> > > But, you can achieve the same effect with:
> > >
> > > conntrack -L | grep OFFLOAD | wc -l
> >
> > Yes, we are doing that right now, but when we have something like
> > 10 million conntrack entries this ends up being a very long
> > and expensive operation just to get the number of offloaded
> > flows. It would be really beneficial to know it without
> > going through all conntrack entries.
> >
> > > ?
>
> Yes, with such a large number of entries, conntrack -L is not
> convenient.