3.14-rt ARM performance regression?

Hey folks-

We've recently undertaken an upgrade of our kernel from 3.2-rt to
3.14-rt, and have run into a performance regression on our ARM boards.
We're still in the process of trying to isolate what we can, but
hopefully someone's already run into this and has a solution or might
have some useful debugging ideas.

The first test we did was to run cyclictest[1] for comparison:

   3.2.35-rt52
   # Total: 312028761 312028761 624057522
   # Min Latencies: 00010 00011
   # Avg Latencies: 00018 00020
   # Max Latencies: 00062 00066 00066
   # Histogram Overflows: 00000 00000 00000

   3.14.25-rt22
   # Total: 304735655 304735657 609471312
   # Min Latencies: 00013 00013
   # Avg Latencies: 00023 00024
   # Max Latencies: 00086 00083 00086
   # Histogram Overflows: 00000 00000 00000

As you can see, we're seeing a 30%-40% degradation not just in the maximum
latencies, but also in the minimum and average latencies.  The above numbers
are with the system under a network throughput load (iperf), but changing the
load seems to have little impact (in fact, we see a general slowdown even when
the system is otherwise idle).

The ARM SoC used for testing is the dual core Xilinx Zynq.

We've observed no such degradation on our x86 boards.

Many things have changed in the ARM world between these releases, and
bisection is unfortunately difficult for us.  However, we were able to give
3.10-rt a try, and it shows the same performance degradation.

We suspected something was up with time accounting: since 3.2, Zynq gained a
clock driver and shifted to using the arm_global_timer driver as its
clocksource.  We've compared register dumps of the clocks, caches, and timers
between kernels, and the hardware appears to be configured identically.  It
also seems that identical code paths run slower under 3.14-rt, as observed
with the function tracer and the local ftrace clock; we're working to
characterize this better.

We did, however, construct a test to validate against an external clock that
clock_nanosleep() sleeps for as long as it claims: toggle a GPIO, sleep for a
small period of time, toggle again, and verify on a scope that the measured
duration matches.

The toolchain is the same for both kernels (gcc 4.7.2).

We also brought up 3.14-rt on a BeagleBone Black (also ARM) and compared its
performance to a 3.8-rt build (bringing up 3.2-rt would require a bit more
effort).  We observed a ~30% degradation on this platform as well.

If anyone has any ideas, please let us know!  Otherwise, we'll follow up with
anything else we discover.

Thanks!
  Josh

[1] cyclictest -H 500 -m -S -i 237 -p 98