On Fri, Aug 17, 2018 at 09:25:48AM +0200, Norbert Manthey wrote:
> Dear Dan,
>
> rather than fixing a single limit as in this case, I'd be interested in
> adding another command line parameter that controls all the limits at
> once, and would allow them to be increased based on user demand. This
> could be useful for runs where I'm interested in more results and do not
> care about run time (e.g. because I run over night or the weekend).
>
> Is there a simple way to spot all the used limits, or can you point out
> the relevant places so that I can adapt the tool appropriately?
>
> Thanks!

No, there isn't really... There are a bunch of limits, and some of them
are subtle. All of them come from real-world testing on the kernel,
where without the limit the build just didn't complete.

One example of a subtle limit is that if Smatch is tracking a list of
possible values, like parameter X can be "1,4,6,...", then when the
string gets too long, Smatch just says ok, it goes up to type_max. This
is a kind of hacky way to deal with recursive calls, because otherwise
if a function does:

	void frob(int x)
	{
		frob(x + 2);
	}

then the list of potential values of x just keeps getting longer. A
bunch of the limits just solve recursion in one form or another.

I don't think boosting the limits really gains you very much. One of
the most common limits to hit is "this function is taking too long to
parse, so let's stop calculating implications of how variables are
related to each other". But hitting that limit just means you get more
warnings, and they're mostly false positives...

What I would like to do is sort of say "parse all the function calls
for this file as if they were inline". Say you have code like:

	out:
		free_whatever(foo);
		return ret;

Maybe there are five "goto out;" lines; it would be useful to parse
free_whatever(foo) five times, once per path. Even if it took an hour
to parse one file, that would be fine if you were really interested in
that file.

regards,
dan carpenter
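
[Not part of the original mail: a self-contained C sketch of the "several
gotos to one cleanup label" pattern described above. All names here
(frob_device, free_whatever, the error codes) are hypothetical; the point
is only that three distinct paths reach the same `out:` label, so a
checker that re-parsed the cleanup code per incoming path would track
three separate states instead of one merged state at the label.]

	#include <assert.h>
	#include <stdlib.h>

	static int free_count;

	/* Stand-in for the cleanup helper in the example. */
	static void free_whatever(int *foo)
	{
		free(foo);
		free_count++;
	}

	static int frob_device(int cmd)
	{
		int *foo = malloc(sizeof(*foo));
		int ret = 0;

		if (!foo)
			return -1;
		if (cmd < 0) {		/* error path 1 */
			ret = -2;
			goto out;
		}
		if (cmd > 100) {	/* error path 2 */
			ret = -3;
			goto out;
		}
		*foo = cmd;		/* success path */
	out:
		/* One label, reached from three different paths. */
		free_whatever(foo);
		return ret;
	}

	int main(void)
	{
		assert(frob_device(-5) == -2);
		assert(frob_device(500) == -3);
		assert(frob_device(42) == 0);
		/* free_whatever() ran once per path through out: */
		assert(free_count == 3);
		return 0;
	}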