Dear List,

If this isn't the right place to ask, please advise.

I've come across different program execution times depending on which gcc version was used to compile the C code. I understand that some variation between compiler generations is expected, but I expected newer compilers to give better results. In fact, the results were worse.

The program/shared libraries involved are
- coded in C
- compiled
  as version 1: with gcc 3.3.3 on SLES 9
  as version 2: with gcc 4.1.2 on SLES 10 SP1
- built with identical gcc flags for both versions:
  gcc -c -DLINUX -D_GNU_SOURCE -fPIC -funsigned-char -m32 -g

The application is single-threaded and CPU bound; in both cases one CPU core runs at or near 100% utilization.

The averaged test run times were
- 4m07s for the program compiled with gcc 3.3.3
- 5m24s for the program compiled with gcc 4.1.2

The test runs were performed on the same system, an 8-CPU AMD Opteron machine at 2.6 GHz per CPU, running SLES 10 SP1.

The program does pattern matching and format generation, nothing fancy; no floating-point calculations or similar math is involved. Using optimization flags is not an option at the moment, because large parts of the application failed in -O test runs and would first need to be fixed and tested extensively. Profiling with -pg/gprof has not produced usable results, as shared libraries are not tracked.

Is there any explanation for this, or am I missing something?

Regards,
Paul Moore
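
P.S. In case it helps with suggestions: since gprof does not follow the shared objects, one route that might give per-library data is glibc's LD_PROFILE/sprof mechanism. A rough sketch of what I understand the invocation to look like (library and program names below are placeholders only, not the real ones):

  # profile one shared object while running the program;
  # the dynamic linker writes <soname>.profile into LD_PROFILE_OUTPUT
  LD_PROFILE=libexample.so.1 LD_PROFILE_OUTPUT=/tmp ./program

  # produce a flat profile from the collected data
  sprof -p libexample.so.1 /tmp/libexample.so.1.profile

I have not pursued that route yet, so suggestions on whether it is worthwhile here are welcome.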