If you have cache stalls in the algorithm/benchmark, and/or I/O that has to
be waited on, then hyperthreading will usually help. If the code is a nice
tight loop that correctly and fully uses the CPU with minimal cache stalls,
then hyperthreading will hurt.

I was doing some benchmarks and more or less accidentally found that, when
running a single-CPU benchmark on an otherwise idle system (36 real cores),
disabling hyperthreading on the system, and/or pinning the benchmark to a CPU
whose specific hyperthread sibling was offline (finding the sibling in /sys
and echoing 0 to its "online" file; example commands are at the end of this
message), resulted in a consistent, measurable speedup (1-2%, I think). The
cost of the hyperthread simply existing was 1-2% less work being done by the
primary thread, because the hyperthread made the primary thread less
efficient in some manner (stolen cache and/or stolen CPU cycles).

On the newer CPUs Intel uses thermal throttling to determine how hard it can
push the Turbo Boost frequencies, so thermal throttling is expected and is
not, by itself, a concern. When I was testing several different socket+memory
combinations, each CPU/socket seemed to thermally throttle at a different
frequency (all well above the rated frequency, but some sockets were always a
few percent faster than others and reached a few percent higher frequencies
before throttling). I also noticed that the frequency the CPU could sustain
at the edge of thermal throttling differed (as would be expected) depending
on the benchmark: simple benchmarks ran all the way at the maximum Turbo
Boost frequency (+600 MHz over rated), while other benchmarks only allowed
+200 MHz over rated before overheating reduced the turbo frequency. (The
second set of commands at the end of this message is one way to watch where
a CPU settles.)

On Wed, Jun 29, 2022 at 6:32 AM George N. White III <gnwiii@xxxxxxxxx> wrote:
>
> On Wed, Jun 29, 2022 at 5:43 AM Stephen Morris <samorris@xxxxxxxxxxxxxx> wrote:
>>
>> On 22/6/22 23:54, Matthew Miller wrote:
>> > [...]
>>
>> > Or, `cpu-x` for a GUI view with a lot of detail.
>> Thanks Greg. I installed cpu-x and tried all the commands. What makes
>> the first two processes difficult from my perspective is the cpu I have
>> has 32 threads, all of which are the same, so the first two processes
>> list all 32.
>> I ran the cpu-x benchmarks for random numbers, and what was interesting
>> was that the results for 32 threads were only around 16 times the result
>> for 1 thread, which is probably to be expected given the cpu has 16 cores.
>
>
> My experience was that disabling hyperthreading didn't reduce throughput.
> These multi-core systems generally do better with integer workloads; I
> think some have one f.p. unit per core. There can be very counterintuitive
> performance changes due to CPU cache issues and communications overhead.
> My experience is mostly with I/O intensive workloads. We generally found
> it best to limit those tasks to a fraction of the cores so background
> tasks (job control/monitoring, backups, etc.) didn't stall. After a big
> effort to make efficient use of all the cores you may encounter thermal
> throttling. It was better to adjust the workload to avoid thermal issues:
> more consistent throughput and fewer issues with background tasks.
>
>
> --
> George N. White III
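For anyone who wants to try the hyperthread experiment, the commands I mean
are roughly the following (a sketch only; cpu0/cpu36 are example numbers, and
"your_benchmark" stands in for whatever you are running -- check
thread_siblings_list for the actual pairing on your machine):

    # which logical cpus share cpu0's physical core (e.g. prints "0,36")
    cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list

    # take the sibling offline so cpu0 has the whole core to itself
    echo 0 | sudo tee /sys/devices/system/cpu/cpu36/online

    # pin the benchmark to cpu0
    taskset -c 0 ./your_benchmark

    # bring the sibling back online afterwards
    echo 1 | sudo tee /sys/devices/system/cpu/cpu36/online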
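And to watch where a CPU settles at the edge of thermal throttling while a
benchmark runs, something along these lines works (again just a sketch;
turbostat, if you have it installed, gives far more detail):

    # current frequency of every logical cpu, refreshed each second
    watch -n1 'grep "cpu MHz" /proc/cpuinfo'

    # or read it per cpu from sysfs (values are in kHz)
    cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq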