On 16.01, Pablo Neira Ayuso wrote:
> On Fri, Jan 16, 2015 at 07:35:57PM +0000, Patrick McHardy wrote:
> > On 16.01, Thomas Graf wrote:
> > > On 01/16/15 at 06:36pm, Patrick McHardy wrote:
> > > >
> > > > Well, we do have a problem with interrupted dumps. As you know, once
> > > > the netlink message buffer is full, we return to userspace and
> > > > continue dumping during the next read. Expanding obviously changes
> > > > the order since we rehash from bucket N to N and 2N, so this will
> > > > indeed cause duplicate (doesn't matter) and missed entries.
> > >
> > > Right, but that's a Netlink dump issue and not specific to rhashtable.
> >
> > Well, rhashtable (or generally resizing) will make it a lot worse.
> > Usually we at worst miss entries which were added during the dump,
> > which is made up for by the notifications.
> >
> > With resizing we might miss anything; it's completely nondeterministic.
> >
> > > Putting the sequence number check in place should be sufficient
> > > for sets, right?
> >
> > I don't see how. The problem is that the ordering of the hash changes,
> > so it will skip different entries than those that have already been
> > dumped.
>
> I think the generation counter should catch this sort of problem.
> The resizing is triggered by a new or deleted element, which bumps it
> once the transaction is handled.

I don't think so. It tracks only two generations, and we can have an
arbitrary number of changes while performing a dump.
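
To make the ordering problem concrete, here is a minimal userspace sketch
(not kernel code; all names and sizes are made up for illustration) of a
dump that resumes by bucket index while the table doubles underneath it,
splitting old bucket N across buckets N and N + old_size:

/*
 * Toy model of the problem discussed above: a netlink-style dump that
 * resumes by bucket index, interleaved with a table doubling. This is
 * an illustration only, not rhashtable or nf_tables code.
 */
#include <stdio.h>

#define OLD_SIZE 4
#define NEW_SIZE 8
#define NENT     8

struct entry { unsigned int key; };

static struct entry entries[NENT] = {
	{ 0 }, { 1 }, { 2 }, { 3 }, { 4 }, { 5 }, { 6 }, { 7 },
};

/* identity hash for clarity: bucket = key & (size - 1) */
static unsigned int bucket(unsigned int key, unsigned int size)
{
	return key & (size - 1);
}

/* dump buckets [from, to) of a table with 'size' buckets */
static void dump(unsigned int from, unsigned int to, unsigned int size)
{
	unsigned int b, i;

	for (b = from; b < to; b++)
		for (i = 0; i < NENT; i++)
			if (bucket(entries[i].key, size) == b)
				printf("  dumped key %u from bucket %u\n",
				       entries[i].key, b);
}

int main(void)
{
	/* first pass fills the "message buffer" after two buckets */
	printf("first pass (old table, %d buckets):\n", OLD_SIZE);
	dump(0, 2, OLD_SIZE);

	/*
	 * The table grows while userspace processes the partial dump.
	 * Every entry in old bucket N now lives in new bucket N or
	 * N + OLD_SIZE, so the iteration order changes under us.
	 */
	printf("resize to %d buckets, resume from bucket 2:\n", NEW_SIZE);
	dump(2, NEW_SIZE, NEW_SIZE);

	/*
	 * Keys 4 and 5 hashed to old buckets 0 and 1 (already dumped)
	 * but land in new buckets 4 and 5, so they are dumped twice.
	 */
	return 0;
}

In this toy grow case the moved entries only show up twice; with a shrink
(or a cursor that also tracks the position within a bucket) entries can
move behind the resume point and be missed entirely, which is the
nondeterminism described above.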