I'm honestly not trying to resurrect some gcc 2.96 flame war or anything here, but I'm not a very seasoned C programmer, and I've run across an inconsistency between Red Hat's version of gcc and gcc 2.95.4 on a Debian system. Consider the following uninspired, pointless piece of code:

    #include <stdio.h>

    float toot(int, float);

    main()
    {
        int a = 4;
        float b = 5;
        float result = 0;

        result = toot(a, b);
        printf("%f\n", result);
    }

    float toot(int x, float y)
    {
        if (y == 20) {
            return y;
        } else {
            toot(x, x*y);
        }
    }

Compiled with Red Hat's gcc 2.96, I get "nan" (however, if I take out the recursive call and just return x*y, I get 20.000000). Compiled with Debian's 2.95.4, I get 20.000000.

Can anybody explain to me (a) why, and (b) whether there's something inherently wrong with what I'm doing that would cause this to fail on a Red Hat system? I know the code is pointless, but it's an extremely dumbed-down version of a more complex problem exhibiting the exact same behavior.

Thanks in advance for any insights that can be provided. Like I said, I'm not trying to restart an old flame war or anything -- I'm just a newbie to C who is honestly curious about what is going on under the covers to cause the inconsistency.

--Chris.

_______________________________________________
Redhat-devel-list mailing list
Redhat-devel-list@redhat.com
https://listman.redhat.com/mailman/listinfo/redhat-devel-list