Patrick McHardy wrote:
> This is a first attempt to replace some global locks by private
> per-conntrack locks. On 64 bit, it fits into a hole and doesn't
> enlarge struct nf_conn.
>
> Wrt. the event cache, we certainly don't want to take and release
> the lock for every event. I was thinking about something like this:
>
> - add a new member to the event structure to hold undelivered
>   events (named "missed" below)
> - cache events in the existing member as you're doing currently
> - on delivery, do something like this:
>
>         events = xchg(&e->cache, 0);
>         missed = e->missed;
                   ^^^
I think that we need to take the lock here, since we read e->missed.
I see this possible issue:

  CPU0 gets a copy of the missed events (without taking the lock).
  CPU1 has already delivered the missed events and clears them.
  CPU0 delivers missed events that were already delivered by CPU1.

>         ret = notify->fcn(events | missed, &item);
>         if (!success || missed) {
>                 spin_lock_bh(&ct->lock);
>                 if (!success)
>                         e->missed |= events;
>                 else
>                         e->missed &= ~missed;
>                 spin_unlock_bh(&ct->lock);
>         }
>
> so if we failed to deliver the events, we add them to the missed
> events for the next delivery attempt. Once we've delivered the
> missed events, we clear them from the cache.
>
> Now is that really better - I'm not sure myself :) The per-conntrack
> locking would be an improvement though. What do you think?

Indeed, I also think that the per-conntrack locking would be an
improvement for the protocol helpers.

Wrt. the event cache, the missed field saves us from taking the lock
on every event caching, at the cost of a bit more memory. I think
this is more conservative but safer than my approach (no potential
deferral from retrying cmpxchg forever, even if that's unlikely).
Still, we would need to take the spinlock for the event delivery
(see the sketch below).

Let me know what you think.
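Here is a rough, completely untested sketch of the delivery path with
that locking, keeping the field names from your snippet. The sketch_*
structures, deliver_fn_t and sketch_deliver() are only stand-ins for
the real conntrack types, stripped down to what the sketch needs:

#include <linux/spinlock.h>
#include <linux/atomic.h>

struct sketch_ecache {
        unsigned long cache;    /* events cached since the last delivery */
        unsigned long missed;   /* events whose delivery failed earlier */
};

struct sketch_conn {
        spinlock_t lock;        /* the new per-conntrack lock */
        struct sketch_ecache ecache;
};

/* returns 0 on success, negative error if the listener dropped the events */
typedef int (*deliver_fn_t)(unsigned long events, void *item);

static int sketch_deliver(struct sketch_conn *ct, deliver_fn_t fcn, void *item)
{
        struct sketch_ecache *e = &ct->ecache;
        unsigned long events, missed;
        int ret;

        /* atomically grab and clear the newly cached events */
        events = xchg(&e->cache, 0);

        /*
         * Read e->missed under the per-conntrack lock so that a
         * concurrent delivery cannot clear entries between this read
         * and the update below (the CPU0/CPU1 scenario above).
         */
        spin_lock_bh(&ct->lock);
        missed = e->missed;
        spin_unlock_bh(&ct->lock);

        ret = fcn(events | missed, item);

        if (ret < 0 || missed) {
                spin_lock_bh(&ct->lock);
                if (ret < 0)
                        e->missed |= events;    /* retry these next time */
                else
                        e->missed &= ~missed;   /* missed ones went out now */
                spin_unlock_bh(&ct->lock);
        }
        return ret;
}

So the fast path (successful delivery, nothing missed) only pays for
the lock around the read of e->missed, and the lock is never taken
when caching an event, only at delivery time.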