On 1/12/2025 10:30 PM, John Ogness wrote:
> On 2025-01-12, Florian Paul Schmidt <mista.tapas@xxxxxxx> wrote:
>> [...]
>> T: 0 (139313) P:95 I:1000 C: 5881 Min: 2 Act: 3 Avg: 3 Max: 10
>> T: 1 (139314) P:95 I:1500 C: 3920 Min: 1 Act: 1 Avg: 7 Max: 419
>> T: 2 (139315) P:95 I:2000 C: 2940 Min: 1 Act: 1 Avg: 7 Max: 480
>> T: 3 (139316) P:95 I:2500 C: 2352 Min: 1 Act: 1 Avg: 9 Max: 433
>
> Notice the average is considerably higher on the "idle" CPUs. Perhaps
> you have cpufreq scaling enabled?
I don't think so. Cores are pinned at 2.4 GHz by way of cpufreq-set -g
performance.
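In case it helps rule that out completely, here is a minimal sketch that dumps each core's governor and current frequency from the standard cpufreq sysfs paths (the entries may simply be absent on kernels built without cpufreq, hence the n/a fallback):

```shell
# Print the governor and current frequency for every CPU; "n/a" means
# the cpufreq sysfs entries are absent for that core.
for c in /sys/devices/system/cpu/cpu[0-9]*; do
  gov=$(cat "$c/cpufreq/scaling_governor" 2>/dev/null || echo n/a)
  cur=$(cat "$c/cpufreq/scaling_cur_freq" 2>/dev/null || echo n/a)
  printf '%s: governor=%s cur_freq=%s\n' "${c##*/}" "$gov" "$cur"
done
```

If all four cores report governor=performance and the same cur_freq while idle, frequency scaling really is out of the picture.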
> Are you running these tests using the OSADL kernel?
No, that is going to be the next thing I'll try. Sadly something's up
with the generated patch script. The kernel directory resulting from
running it is missing the top-level Kconfig. There's a hunk in the
monsterpatch that deletes it:
Index: linux-6.1.66-rt19-v8-16k/Kconfig
===================================================================
--- linux-6.1.66-rt19-v8-16k.orig/Kconfig
+++ /dev/null
[...]
Lines 246845 and following.
Why? :)
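For what it's worth, deletion hunks like that can be located without paging through the whole monsterpatch by grepping for the /dev/null side of the hunk header. A minimal sketch, demonstrated on a small stand-in patch file rather than the real one:

```shell
# Find hunks that delete a file outright (new side is /dev/null) and
# report their line numbers. Demonstrated on a stand-in patch file.
patch=$(mktemp)
cat > "$patch" <<'EOF'
Index: linux-6.1.66-rt19-v8-16k/Kconfig
===================================================================
--- linux-6.1.66-rt19-v8-16k.orig/Kconfig
+++ /dev/null
EOF
# Prints each '+++ /dev/null' line plus the preceding '---' line,
# so you can see which file each deletion hunk removes.
grep -n -B1 '^+++ /dev/null' "$patch"
rm -f "$patch"
```

Run against the generated patch itself, that should show every file the patch deletes, Kconfig included.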
The kernel I am running is:
$ uname -a
Linux ogfx5 6.13.0-rc6-v8-rt2+ #1 SMP PREEMPT_RT Thu Jan 9 17:07:08 CET
2025 aarch64 GNU/Linux
I also get similar results with 6.13.0-rc3, but compiled with 16k pages
and for the Pi 5.
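A quick sanity check that the intended build is actually the one running; the page size line should read 16384 on a 16k-page arm64 kernel and 4096 on the default build:

```shell
# Kernel release, machine architecture, and the page size the running
# kernel was built with (16384 for a 16k-page arm64 kernel).
uname -r
uname -m
getconf PAGESIZE
```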
> I assume you see the same effect when running stress(1) pinned to CPU1?
> ... just to be sure the boot CPU is not somehow special. (No need to
> boot with isolcpus since the machine is otherwise idle anyway.)
>
> taskset 0x2 stress -m 1 --vm-stride 16 --vm-bytes 512000000 --vm-keep -c 1
> sudo cyclictest -m -p 95 -a 1,2,3 -t 3
Yeah, I see the same behaviour with this way of running the test: the
core with the stressor on it shows small latencies and the others show
huge ones. The effect disappears once I run two or more stressors, e.g.

taskset 0x4 stress -m 1 --vm-stride 16 --vm-bytes 512000000 --vm-keep -c 1

and

taskset 0x8 stress -m 1 --vm-stride 16 --vm-bytes 512000000 --vm-keep -c 1

Then all cores show huge latencies.
Kind regards,
FPS
--
https://blog.dfdx.eu