On 2025-01-13, Florian Paul Schmidt <mista.tapas@xxxxxxx> wrote:
>>> T: 0 (139313) P:95 I:1000 C: 5881 Min: 2 Act: 3 Avg: 3 Max: 10
>>> T: 1 (139314) P:95 I:1500 C: 3920 Min: 1 Act: 1 Avg: 7 Max: 419
>>> T: 2 (139315) P:95 I:2000 C: 2940 Min: 1 Act: 1 Avg: 7 Max: 480
>>> T: 3 (139316) P:95 I:2500 C: 2352 Min: 1 Act: 1 Avg: 9 Max: 433
>
>> I assume you see the same effect when running stress(1) pinned to CPU1?
>> ... just to be sure the boot CPU is not somehow special. (No need to
>> boot with isolcpus since the machine is otherwise idle anyway.)
>>
>> taskset 0x2 stress -m 1 --vm-stride 16 --vm-bytes 512000000 --vm-keep -c 1
>>
>> sudo cyclictest -m -p 95 -a 1,2,3 -t 3
>
> Yeah, I see the same behaviour with this way of running the test: The
> core with the stressor on it shows small latencies and the other huge
> ones. The effect disappears once I run two or more stressors, e.g.
>
> taskset 0x4 stress -m 1 --vm-stride 16 --vm-bytes 512000000 --vm-keep -c 1
>
> and
>
> taskset 0x8 stress -m 1 --vm-stride 16 --vm-bytes 512000000 --vm-keep -c 1
>
> Then all cores show huge latencies.

To me this looks like memory bus contention. I would expect you could
reproduce the same behavior with bare metal software. Someone with some
experience with this platform would need to speak up. The usefulness of
my casual responses has come to an end. ;-)

John Ogness
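
[Editor's note: the taskset arguments above (0x2, 0x4, 0x8) are CPU affinity
bitmasks, where bit n selects CPU n. A minimal sketch showing how those
values are derived:]

```shell
# taskset takes a bitmask of allowed CPUs: bit n corresponds to CPU n.
# So CPU 1 -> 0x2, CPU 2 -> 0x4, CPU 3 -> 0x8, matching the commands above.
for cpu in 1 2 3; do
  printf 'CPU %d -> mask 0x%x\n' "$cpu" $((1 << cpu))
done
```

Equivalently, `taskset -c 2 ...` names the CPU directly instead of the mask.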