GNU C++ & OpenMP


 



Hi,
I've noticed a strange result when I use GNU OpenMP. I have an existing serial C++ code with some time-consuming for loops where calculations are made on large arrays of doubles (12 arrays of length 1e6, for example), and some of these arrays are also copied to create a vector field (also an array of doubles). The code spends 90% of its time in this particular for loop, which typically takes 1 or 2 minutes.

What I notice is that the OpenMP build is actually slower than the serial code! Not by much, but still slower. I used the 'time' command to compare. I also notice, watching top, that in both cases only one core is taxed, while in the OpenMP build memory usage doubles (actually a second core goes up from 0% to about 10%). I have a quad-core system (dual CPU, dual core, and 4 GB of RAM), so this result is totally unexpected. In my code I explicitly set the number of threads to 4, so I was expecting to see all 4 cores running at 80-90%. I have verified that 4 threads are being launched by running the OpenMP build in GDB.

What is going on here? How can I figure out what the problem is?
Thanks,
Burlen




