On Mon, Aug 10, 2015 at 09:56:46AM +0200, Patrick McHardy wrote:
> On 06.08, Pablo Neira Ayuso wrote:
> > On Wed, Aug 05, 2015 at 11:09:16AM +0200, Patrick McHardy wrote:
[...]
> > > > - preparation phase -
> > > > delete table y
> > > > create table y
> > > > create set x
> > > > - commit phase -
> > > > send NEWGEN, attribute type: begin
> > > > delete table y
> > > > create table y
> > > > create set x
> > > > send NEWGEN, attribute type: end
> > > >
> > > > Thanks for your feedback!
> > >
> > > That might work if the message ordering is then guaranteed. However I think
> > > we can fix this case without changing NEWGEN. Let me think about that a bit,
> > > for now just taking care of the genid checks correctly seems like a good
> > > step forward.
> >
> > But we can catch this problem through ->res_id, OK?
>
> I guess we could with a unique res_id per object, but how would this work
> with multiple object types? Any change bumps res_id, so we'd invalidate
> the full dump for any change.

I see. If we want to be able to invalidate caches at the per-object
level, then I think we have to bring back the idea of having a netlink
attribute for the per-object generation counter.
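
To make the trade-off concrete, here is a small, self-contained C sketch of
the cache-invalidation idea being discussed. The struct layout, the object
names and the notion of a per-object counter delivered alongside NEWGEN are
assumptions for illustration only; this is not the real nf_tables uapi or any
existing library code.

	/* Illustrative sketch only: models the cache-invalidation idea
	 * discussed above.  Names and layout are made up for the example;
	 * this is not the real nf_tables netlink interface. */
	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>

	#define MAX_OBJS 16

	enum obj_type { OBJ_TABLE, OBJ_CHAIN, OBJ_SET };

	struct cached_obj {
		enum obj_type	type;
		char		name[32];
		uint32_t	genid;		/* per-object generation counter */
		int		valid;		/* cached copy still usable? */
	};

	struct cache {
		uint32_t	global_gen;	/* single res_id-style counter */
		struct cached_obj objs[MAX_OBJS];
		unsigned int	nobjs;
	};

	/* With a single global counter (res_id-style): any change bumps the
	 * counter, so the whole cached dump has to be thrown away. */
	static void invalidate_global(struct cache *c, uint32_t new_gen)
	{
		if (new_gen != c->global_gen) {
			for (unsigned int i = 0; i < c->nobjs; i++)
				c->objs[i].valid = 0;
			c->global_gen = new_gen;
		}
	}

	/* With a per-object counter carried in a (hypothetical) netlink
	 * attribute: only the object whose counter changed is invalidated. */
	static void invalidate_object(struct cache *c, enum obj_type type,
				      const char *name, uint32_t new_gen)
	{
		for (unsigned int i = 0; i < c->nobjs; i++) {
			struct cached_obj *o = &c->objs[i];

			if (o->type == type && strcmp(o->name, name) == 0 &&
			    o->genid != new_gen) {
				o->valid = 0;
				o->genid = new_gen;
			}
		}
	}

	int main(void)
	{
		struct cache c = {
			.global_gen = 1,
			.objs = {
				{ OBJ_TABLE, "y", 1, 1 },
				{ OBJ_SET,   "x", 1, 1 },
			},
			.nobjs = 2,
		};

		/* delete table y + create table y: only "y" needs refetching */
		invalidate_object(&c, OBJ_TABLE, "y", 2);

		for (unsigned int i = 0; i < c.nobjs; i++)
			printf("%s: %s\n", c.objs[i].name,
			       c.objs[i].valid ? "cached copy ok" : "refetch");

		/* invalidate_global() would instead drop both entries here */
		(void)invalidate_global;
		return 0;
	}

The point of the sketch: with only a global counter, the table delete/create
above forces a refetch of set "x" as well, whereas a per-object counter lets
userspace keep everything that was not touched by the transaction.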