--- Angelo Leto <angleto@xxxxxxxxx> wrote:

> I'm working on applications which are data critical, so when I
> change a library on the system there is the risk that results may
> be different, so I create a repository with the critical libraries,
> and I upgrade the libraries on repository only when it is needed
> and independently from the system libraries (I do this in order to
> upgrade the productivity tools and their related libraries without
> interacting with the libraries linked by my application). Obviously
> when I change the compiler I obtain different results on my
> applications, so my idea is to create a "development package" which
> includes my critical libraries and also the compiler in order to
> obtain the same result (always using the same optimizations flags)
> on my application also when I'm compiling on different Linux
> installations.

This would make me nervous. If your program gives different results
when you use different tool chains, that suggests to me that either
your program is broken or the results you're obtaining are affected
by bugs in the libraries you're using.

You're half right. If your program uses library X, and that library
has a subtle bug in the function you're using, then the result you
get using a different library will be different. The fix is not to
ensure that you use the same library all the time, but to ensure your
test suite is sufficiently well developed that you can detect such a
bug, and use a different function (even if you have to write it
yourself) that routinely gives you provably correct answers.

To illustrate, I generally work with number crunching related to risk
assessment. My programs had better give me identical results
regardless of whether I use gcc or MS Visual C++ or Intel's compiler,
or whatever other tool might be tried, and on whatever platform. I
have written code to do numeric integration, compute the
eigenstructure of general matrices, &c. In each case, there are well
defined mathematical properties that must be true of the result, and
I construct a test suite that, for example, will apply my eigensystem
calculation code to tens of millions of random general square
matrices (random values and random size of matrix) and test the
result (a rough sketch of that kind of check appears below). My code,
then, is provably correct if it consistently provides mathematically
correct results, and these results will be the same regardless of the
platform and tool chain used, because the mathematics of the problem
do not depend on these things.

Even if you're dealing with numerically unstable systems (such as a
dynamic system that produces chaos), your program ought to give
identical results for identical input. Something is wrong if it
doesn't, and the fix isn't to ensure the program is always executed
with binaries created from the same toolchain; it is to figure out
precisely why, so you can fix the program. Whether the bug is in my
program or in a library I am using, if I do not take corrective
action, my program remains buggy, and I have yet to see a situation
where a correct program gives different results when compiled using
different tools.

I am sorry to say that if one has to resort to the practices you
describe, ensuring the same results by ensuring the same libraries
are used, then I would not consider trusting the program at all.
Rather, the use of such practices suggests the QA code for the
program is inadequate to ensure correct results.
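To make that concrete, here is the flavour of check I have in mind,
stripped down to a toy: verify that every eigenpair (lambda, v) a
solver reports actually satisfies A*v = lambda*v to within a tight
tolerance. This is only a sketch, not my actual test suite; the 2x2
matrix and its eigenpair are hand-picked so the example is
self-contained, and the eigensolver itself is left out (in a real
suite the matrices would be random and the eigenpairs would come from
the routine under test).

// Property check for an eigensystem routine: for a claimed eigenpair
// (lambda, v) of A, the residual ||A*v - lambda*v|| must be tiny.
#include <cmath>
#include <cstdio>
#include <vector>

using Matrix = std::vector<std::vector<double>>;
using Vector = std::vector<double>;

// 2-norm of (A*v - lambda*v); this is the quantity the test drives to ~0.
double eigen_residual(const Matrix& A, double lambda, const Vector& v)
{
    const std::size_t n = v.size();
    double sum = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        double Av_i = 0.0;
        for (std::size_t j = 0; j < n; ++j)
            Av_i += A[i][j] * v[j];
        const double r = Av_i - lambda * v[i];
        sum += r * r;
    }
    return std::sqrt(sum);
}

int main()
{
    // Hand-picked example: [[2,1],[1,2]] has eigenvalue 3 with
    // eigenvector (1,1). A real suite would test solver output instead.
    const Matrix A = {{2.0, 1.0}, {1.0, 2.0}};
    const double lambda = 3.0;
    const Vector v = {1.0, 1.0};

    const double res = eigen_residual(A, lambda, v);
    std::printf("residual = %g -> %s\n", res,
                res < 1e-12 ? "PASS" : "FAIL");
    return res < 1e-12 ? 0 : 1;
}

In the real suite you loop a check like this over millions of random
matrices and sizes, and also test other invariants (the trace equals
the sum of the eigenvalues, the determinant equals their product, and
so on). The point is that the pass/fail criterion is mathematical,
not "same answer as the last build".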
I certainly would not tolerate a situation where I get different
trajectories from a numeric integration, or a different eigensystem
from a given matrix, simply because I used a different library or
toolchain to build the program. If such a situation arose, then one
of the versions, if not both, would be giving mathematically
incorrect results!

HTH

Ted