On 05/04/2012 05:30 PM, Josh Triplett wrote:
On Fri, May 04, 2012 at 02:36:40PM +0200, Konrad Eisele wrote:
Take the 2 files b.c and a.h.
vvvvvv b.c vvvvv
#define d1
#include "a.h"
struct s0 { int x; };
int main(int a, char **b) {
struct s0 v;
d2(m);
};
^^^^^^ b.c ^^^^^^
vvvvvv a.h vvvvv
#ifdef d2
#define m v
#else
#define m n
#endif
#ifdef d1
#define d2(a) while(a.x) { }
#endif
^^^^^^ a.h ^^^^^^
Now use sparse and you get:
$./sparse b.c
b.c:6:3: error: cannot dereference this type
The error is that b.c is missing a "#define d2": since d2 is not yet
defined when a.h is read, a.h maps m to the undeclared identifier n,
so d2(m) expands to while (n.x) { }. The fix in b.c:
+#define d2
#define d1
...
With a dependency tree, the output could instead be:
$./sparse b.c
b.c:6:3: error: cannot dereference this type
+macro expansion of d2 defined in a.h:8
+ defined because of #ifdef d1 in a.h:7
+  dependent on d1 defined at b.c:1
+> argument 0 expansion at b.c:6
+ macro expansion of m defined in a.h:4
+  defined because of the #else of #ifdef d2
+   dependent on d2 (not defined)
That looks wildly useful to me. I'd love to see that information
available to Sparse somehow, as long as it doesn't significantly impact
the performance of the common case (namely, running sparse on code that
has no warnings or errors).
One idea: could you check the impact of your patch on a Linux kernel
build (with defconfig)? Try building the kernel with sparse (make C=2),
with and without your patch, and measure the total time. If your patch
has negligible impact on build time, and it doesn't require changing
every other line of Sparse due to interface changes, it should prove
reasonable.
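For example (a sketch; the paths are placeholders - CHECK= is the kbuild
variable that selects the checker binary, and C=2 runs it on every
source file):

```shell
# Time a full sparse run over a defconfig kernel, once per checker build.
make defconfig
make clean && time make C=2 CHECK=/path/to/original/sparse
make clean && time make C=2 CHECK=/path/to/decpp/sparse
```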
make C=2:
original sparse:
real 17m54.997s
user 15m25.181s
sys 2m11.281s
decpp-sparse (from "git clone git://git.code.sf.net/p/decpp/code decpp"):
real 18m29.748s
user 16m18.155s
sys 2m13.221s
But decpp was not written with common-case performance in mind, and
the two runs probably depend on other factors as well. I can't see
how 4 extra bytes per token could have a big impact, if I were to
implement it that way (it is not done that way in decpp).
The other key point: much like Linux, Sparse doesn't normally accept
patches that add a new interface without a patch adding the
corresponding code that uses that interface. Having an implementation
helps ensure that the design of an interface fits its intended purpose.
For instance, if you could create a simple example of the kind of output
you showed above (even just saying in a warning message "expanded from
macro foo"), perhaps modeled after LLVM's clang error messages, and
include that in a second patch depending on the first, then that
two-patch sequence would have a much better chance of getting in.
I understand. Actually, the code that demonstrates this is at
git://git.code.sf.net/p/decpp/code ; after cloning, do a
$ make
$ ./shrinkc t1.c
That is, roughly, the goal.
And yes - it does require some internal structure changes. You don't
get this kind of functionality for free; you have to be invasive -
isn't that obvious? And in my view, it can come with a penalty. The
preprocessing stage is not something that should always be neglected
as if it did not exist: you struggle with macros half of the time you
program.
-- Konrad
Hope that helps,
Josh Triplett
--
To unsubscribe from this list: send the line "unsubscribe linux-sparse" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html