On 2018-03-22 1:35 AM, Matt Turner wrote:
> During a big compile (samba), top showed:
>
> %Cpu0 : 80.9 us, 15.6 sy, 0.0 ni, 0.0 id, 0.0 wa, 3.5 hi, 0.0 si, 0.0 st
> %Cpu1 : 79.6 us, 18.0 sy, 0.0 ni, 0.0 id, 0.0 wa, 2.4 hi, 0.0 si, 0.0 st
> %Cpu2 : 81.4 us, 16.2 sy, 0.0 ni, 0.0 id, 0.0 wa, 2.4 hi, 0.0 si, 0.0 st
> %Cpu3 : 79.1 us, 17.4 sy, 0.0 ni, 0.0 id, 0.0 wa, 3.5 hi, 0.0 si, 0.0 st
>
> The system numbers seem extremely high. I'd expect them to be a few
> percent at maximum.
>
> Do I assume correctly that this is a result of our cache flushing
> problems? The CPUs I have are the PA8900s with 64MB cache. dmesg shows
>
> Whole cache flush 9590519 cycles, flushing 11534336 bytes 8720637 cycles
> Cache flush threshold set to 12387 KiB
> Whole TLB flush 19805 cycles, flushing 11534336 bytes 1825128 cycles
> TLB flush threshold set to 492 KiB
The numbers are of some concern, but I don't think they arise from our cache
flushing. We only flush what's required, although it does take a lot of
cycles to flush the entire cache on the PA8900.
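
The threshold in the dmesg output above comes from that boot-time timing:
flush by range while the range is small, and flush the whole cache once the
range passes the measured break-even size. As a standalone illustration only
(stub helpers, not the real arch/parisc code):

/*
 * Illustrative sketch of the flush-threshold heuristic.  The value of
 * cache_flush_threshold stands in for the "Cache flush threshold set to
 * 12387 KiB" line above; the flush helpers are just stubs.
 */
#include <stdio.h>

static unsigned long cache_flush_threshold = 12387UL * 1024;	/* bytes */

static void flush_range(unsigned long start, unsigned long end)
{
	printf("range flush of %lu bytes\n", end - start);
}

static void flush_whole_cache(void)
{
	printf("whole-cache flush\n");
}

/* Flush only what's required; fall back once the range passes break-even. */
static void flush_cache_range_example(unsigned long start, unsigned long end)
{
	if (end - start < cache_flush_threshold)
		flush_range(start, end);
	else
		flush_whole_cache();
}

int main(void)
{
	flush_cache_range_example(0, 64 * 1024);	/* small: range flush */
	flush_cache_range_example(0, 32UL << 20);	/* large: whole flush */
	return 0;
}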
Of more concern to me is the TLB flushing. It takes about 570 cycles to do
one pdtlb instruction on the rp3440, and whacking the whole TLB slows all
CPUs because of the time needed to reload entries. This gets worse as the
number of CPUs increases. We might improve performance by doing local range
flushes; it would help if we knew which CPUs a vma was used on.
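
Roughly, the idea would be something like this (an illustration only, not
working kernel code; purge_one_tlb_page_local() is a hypothetical helper,
and we would still need to maintain the cpumask as tasks migrate):

/*
 * Illustration: purge a TLB range only on the CPUs this mm has been
 * active on, instead of broadcasting a whole-TLB flush to everyone.
 */
#include <linux/cpumask.h>
#include <linux/mm.h>
#include <linux/smp.h>

struct tlb_range_args {
	unsigned long start;
	unsigned long end;
};

/* Hypothetical helper: the real thing would issue a pdtlb for one page. */
static void purge_one_tlb_page_local(unsigned long addr)
{
	(void)addr;
}

static void purge_tlb_range_on_this_cpu(void *info)
{
	struct tlb_range_args *args = info;
	unsigned long addr;

	for (addr = args->start; addr < args->end; addr += PAGE_SIZE)
		purge_one_tlb_page_local(addr);
}

static void example_flush_tlb_range(struct vm_area_struct *vma,
				    unsigned long start, unsigned long end)
{
	struct tlb_range_args args = { .start = start, .end = end };

	/* Only CPUs in mm_cpumask() need the purge; others keep their TLBs. */
	on_each_cpu_mask(mm_cpumask(vma->vm_mm),
			 purge_tlb_range_on_this_cpu, &args, 1);
}

The win would scale with the CPU count, since CPUs that never touched the
mm would no longer have to reload entries they never used.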
I think another big contributor to the high system numbers is memory
allocation/deallocation. I think our gcc build and test time is comparable
to or better than on HP-UX. However, I think we are about 30% behind, say,
Alpha on package build times.
Dave
--
John David Anglin dave.anglin@xxxxxxxx