On Tue, Jul 11, 2017 at 12:32:48AM +0200, Eric Leblond wrote:
> 
> Hi,
> 
> Here's a small patchset fixing some memory leaks in nftables. Most
> of them have been found using ASAN.

Series applied, thanks Eric.

> There is still a problem in memory handling due to the max_errors
> system that stacks errors to avoid an exit on the first error. The
> consequence is that the bison parser loses track of its
> internal stacks and cannot call the destructors when there
> is an error in the command.

Probably we need explicit object tracking via list insertion, then
rewind and release them? Would that be possible? I would expect this
triggers a large patchset to do this right.

> If we set max_errors to 1:
> 
> diff --git a/src/main.c b/src/main.c
> index 7fbf00a..183bd0e 100644
> --- a/src/main.c
> +++ b/src/main.c
> @@ -29,7 +29,7 @@
>  #include <cli.h>
> 
>  static struct nft_ctx nft;
> -unsigned int max_errors = 10;
> +unsigned int max_errors = 1;
>  #ifdef DEBUG
>  unsigned int debug_level;
>  #endif
> 
> then there is no more memory leak in the case of an invalid command,
> but we lose the display of multiple errors.
> 
> A possible way to fix that would be to make max_errors settable
> via a configuration function. It would be set to 1 by default,
> so users of libnftables would not experience memory leaks, but we
> could keep the same behaviour in nft by setting it to 10
> explicitly.

I would prefer we find a way to fix this without adding this
limitation.

Let me know, thanks!