Jan Engelhardt wrote:
> On Sunday 2010-03-28 12:07, Peter Gordon wrote:
>
>> I need to add a number of rules to ebtables and I cannot afford the
>> fork overhead for each line.
>
> The larger part of the overhead is due to the tables being recomputed
> (dumped, rule added, restored) every time you call it. That's why one
> should use xx-restore.
>
>> So what I want to do is to read each line from a file and have the
>> program iterate over the file. ebtables-save and ebtables-restore are
>> not good enough for my application, because I can't add rules
>> incrementally.
>
> Dump the rules with ebtables-save to a buffer, add your rules, and use
> -restore. That's sort of incremental, and the fastest way to put a
> ruleset in place atomically.

That's indeed the easiest way to do it. The only problem is that the
counters of already existing rules won't always be correct, because
packets will traverse the chains between the -save and the -restore.
It's probably not so hard to alter ebtables-restore to keep the
counters correct.

A few years ago I experimented with an ebtables 'daemon' that would run
in the background and accept commands through a pipe. This has the
advantage that the distinction between added and already existing rules
is maintained, enabling correct counters. The changed table is updated
in userspace and only committed to the kernel when a specific command
is given. See ebtablesd.c and ebtablesu.c for the code. Performance
tests are available in examples/perf_test/perf_test. I'm not
maintaining these files anymore, though.

cheers,
Bart

--
Bart De Schuymer
www.artinalgorithms.be
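[Archive note] The save/add/restore approach discussed above can be sketched
roughly as follows. This is only a sketch: the ebtables-save call is stubbed
with a literal dump so it runs without root, and the table, chain, and rule
names are made-up examples, not taken from the thread.

```shell
# Batch-add rules via one save/restore cycle instead of forking
# ebtables once per rule.
# On a real system the dump would come from:  dump=$(ebtables-save)
# Here it is a literal stand-in buffer (assumed ebtables-save format).
dump='*filter
:INPUT ACCEPT
:FORWARD ACCEPT
:OUTPUT ACCEPT'

# Rules to append, in ebtables-save syntax (example rules only)
new_rules='-A FORWARD -p IPv4 -j ACCEPT
-A FORWARD -p ARP -j ACCEPT'

# Build the combined buffer; on a real system this would be piped
# into:  ebtables-restore
printf '%s\n%s\n' "$dump" "$new_rules"
```

As the thread notes, this installs the whole ruleset in one shot, but the
counters of pre-existing rules can drift for packets that traverse the
chains between the -save and the -restore.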