Re: Gnu C++ & Open MP
Tom St Denis wrote:
burlen wrote:
Hi,
I noticed some strange results when I use GNU OpenMP. I have an existing serial C++ code with some time-consuming for loops in which calculations are made on large arrays of doubles (12 arrays of length 1e6, for example), and some of these arrays are copied to create a vector field (also an array of doubles). The code spends about 90% of its time in this particular for loop, typically 1 or 2 minutes. What I notice is that the OpenMP build is actually slower than the serial code! Not by much, but still slower (I used the 'time' command to compare). Watching top, I also see that in both cases only one core is taxed, while in the OpenMP build memory usage doubles (a second core does go up from 0% to about 10%). I have a quad-core system (dual CPU, dual core, 4 GB of RAM), so this result is totally unexpected. In my code I explicitly set the number of threads to 4, and I was expecting to see all 4 cores running at 80-90%. I have verified, by running the OpenMP build in GDB, that 4 threads are being launched. What is going on here, and how can I figure out what the problem is?
Thanks, Burlen
What's the affinity of the threads? IIRC, by default at least with pthreads they attach to the CPU that issued the thread, so you have to set the affinity to 0xF or -1 [all].
Tom
Thanks for the tip. Googling, I found there is, or rather will be, an environment variable, GOMP_CPU_AFFINITY, in a later release (it's not in 4.2). I have little desire to configure and build GCC, so I am wondering: is there an alternate approach? Can I control, external to OpenMP, how threads are mapped to CPUs?