Re: hyperthreading and RT latency

Hi Alison,

You've already had a number of answers to your query, but I'll chime
in with my two cents anyhow.

Although I am no longer in a position to provide actual data (having
left my previous employer a few weeks back, where I ran some
experiments for myself), my findings were similar in nature to those
of Jonathan (on a Gen 10 HPE ProLiant server, with a 5.x-rt kernel).

For our application, the 'random' latencies induced by the enabling of
hyper-threading were deemed acceptable and, given our overall
configuration of the target system, allowed for more satisfactory
performance across the range of applications and tasks running on the
system overall.  Hence, we kept it enabled despite the additional
latency noise.  So it all comes down to your use-case, the
configuration and particulars of your target system, and your system
performance requirements.

As John has pointed out already, the best thing to do is to try to
quantify any effects of enabling / disabling hyper-threading on your
system and to evaluate the effects with respect to your system
performance requirements.  This advice applies across the board when
trying to determine what impact certain system configuration changes
have on the real-time performance of your system.  The rt-tests suite
of tools may help you to do so.  If you've not used them already, I
*strongly* recommend you get familiar with these tools and their use,
as they are a crucial component of the RT Linux developer toolkit.
The following Wiki page links to some useful information regarding the
use of these tools, as well as other tools useful for RT Linux
development: https://wiki.linuxfoundation.org/realtime/documentation/howto/tools/start.
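For a concrete comparison, something along these lines is a reasonable
starting point (a sketch only: it needs root, a kernel new enough to
expose the runtime SMT control knob (~4.19+), and the cyclictest
options shown are standard rt-tests flags whose values you should tune
to your own requirements):

```shell
# Record the current SMT state, then disable SMT at runtime.
# On older kernels, use the BIOS setting or the nosmt boot parameter instead.
cat /sys/devices/system/cpu/smt/control
echo off | sudo tee /sys/devices/system/cpu/smt/control

# Baseline wake-up latency run with cyclictest (from rt-tests):
# one measurement thread per CPU, SCHED_FIFO priority 95, 200 us interval,
# memory locked, histogram output so the worst case is easy to extract.
sudo cyclictest --mlockall --smp --priority=95 --interval=200 \
    --duration=24h --histogram=200 > smt_off.log

# Re-enable SMT and repeat under the same conditions.
echo on | sudo tee /sys/devices/system/cpu/smt/control
sudo cyclictest --mlockall --smp --priority=95 --interval=200 \
    --duration=24h --histogram=200 > smt_on.log

# Then compare the max latencies of the two logs against your deadline budget.
```

Ideally you'd also run a representative load (rteval, or your actual
workload) alongside each measurement, since an idle system tends to
understate the latencies you'll see in production.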

Jack

On Tue, Aug 10, 2021 at 7:54 AM Jonathan Schwender
<schwenderjonathan@xxxxxxxxx> wrote:
>
> Hi Alison,
>
> > Is the advice still current?   Should we RT-users all still turn hyperthreading off?
>
> I ran some tests as a part of my master's thesis in the beginning of this year with the 5.10-rt kernel on an Intel Broadwell-EP 2-socket server.
> If you are interested, I can dig up the graphs I made, but the gist regarding wake-up latencies measured by _cyclictest_ (24 hours each) is:
> 1. Task-isolation (placing the RT-task on a dedicated core) + Cache allocation + disabled Hyperthreading yields the best latencies. Something around 4-5us worst-case latencies were possible with some optimizations.
> 2. Placing a load (rteval) together with cyclictest increases the latencies, but worst-case latencies of I think 16us are still okay for many applications
> 3. Isolating a task on a dedicated CPU and placing a load (rteval) on the neighbor CPU sharing the same core yields strictly worse latencies compared to 2). I think it was around 50us worst-case.
> 4. Isolating a task on a dedicated core (hyperthreading disabled), but enabling hyperthreading for the non-critical cores seems to have a rather small negative impact, as long as CAT is used to reserve cache for the isolated core. I'd have to look up the details though.
>
> I don't think the situation has improved on more modern hardware, since AFAIK the SMT hardware has no knowledge of your task's priority.
>
> >Thanks,
> >Alison Chaiken
>
> Best Regards,
>
> Jonathan Schwender
>



