On Wed, 23 Sep 2020 12:08:22 +0000 "Gregorio ." <devim@xxxxxxxxxx> wrote:
> You can manually set the CPU affinity (see taskset utility) to decide
> the core(s) where to run your process, even if it is already running.

I can see how that would work as a manual intervention. But this used to happen automatically as the default for the system.

I've been trying to research this, but most of the web articles available are about sharing a single core and ensuring a fair allocation of that core's resources among several tasks. This kind of migration of a running task to random cores when there isn't any resource contention does not seem to get much mention. The kernel documentation has the same bias.

My plan right now is to compile an older kernel, say 5.5 or 5.6, where this was working by default, and see if it still works. If it does, then it is a kernel change. If it doesn't, then something changed in the Fedora configuration. That will narrow down where I have to look.

I'm probably not searching correctly, because I have only a vague understanding of the terminology used for scheduling tasks and of the interaction between the scheduler and NUMA. When I search the kernel mailing list, the problem isn't a lack of matches but an overwhelming number of them. This seems to be a very active area of development.

In the end, I might just have to live with it and use your suggestion to manually shift the task when I am present and think about it. Maybe it isn't really an issue, but it seems to me that for the CPU core in question, it is a lot like over-clocking it.

Thanks for your reply. It gave me another thread to pull.
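
For completeness, here is a minimal sketch of the manual intervention suggested above: pinning an already-running process to one core with sched_setaffinity(2), which is what the taskset utility wraps. The PID (1234) and CPU number (2) are placeholders for illustration, not values from this thread.

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

int main(void)
{
    pid_t pid = 1234;   /* placeholder: PID of the already-running task */
    int cpu = 2;        /* placeholder: the core to pin it to */

    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);

    /* Restrict the task to the chosen core; roughly equivalent to
     * running `taskset -cp 2 1234` from the command line. */
    if (sched_setaffinity(pid, sizeof(set), &set) == -1) {
        perror("sched_setaffinity");
        return EXIT_FAILURE;
    }

    /* Read the mask back to confirm the new affinity took effect. */
    if (sched_getaffinity(pid, sizeof(set), &set) == -1) {
        perror("sched_getaffinity");
        return EXIT_FAILURE;
    }
    printf("task %d allowed on CPU %d: %s\n",
           (int)pid, cpu, CPU_ISSET(cpu, &set) ? "yes" : "no");
    return EXIT_SUCCESS;
}

Note that this only constrains where the scheduler may place the task; it does not change why the scheduler was migrating it across cores in the first place, which is the behavior being investigated here.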