On Wed, 2020-06-10 at 21:18 +1000, Stephen Morris wrote:
> On 10/6/20 7:56 pm, Patrick O'Callaghan wrote:
> > On Tue, 2020-06-09 at 16:52 -0700, Gordon Messmer wrote:
> > > On 6/9/20 5:10 AM, Stephen Morris wrote:
> > > > I have the following messages in dmesg output. Are they indicating
> > > > a CPU issue, or do they just appear because Linux is running in a
> > > > VM under Windows, sharing its CPU cores with Windows, and Windows
> > > > happened to be using core 7 at the time the process checked?
> > > Yes, probably something like that. Typically, in order to schedule
> > > virtual machine run time, all of the CPUs that the guest will use
> > > must be free simultaneously. As you allot more CPUs to a virtual
> > > machine, that becomes harder to schedule, and the guest can
> > > experience greater latency between runs. If your host system doesn't
> > > have at least 12 CPU cores, I would recommend against allotting 8 to
> > > the guest. Fewer cores are easier to schedule and may perform
> > > better.
> > In KVM/QEMU you can also pin specific cores to your VM to prevent
> > competition between the host and guest (mainly by avoiding cache
> > pollution, as I understand it). I do this for Windows gaming under
> > Fedora, but I don't know if that's supported in VirtualBox on a
> > Windows host.
> I think VirtualBox has an equivalent, in that it has an execution cap
> to limit the amount of physical CPU time the virtual CPU is allowed to
> use; setting the percentage to 100% disables the cap, which I assume
> means the VM will not release the CPU back to Windows.

Not really the same thing in how it works, but it certainly would help.

poc
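P.S. For anyone who wants to try the pinning I mentioned under
libvirt/KVM, it looks something like the following. This is just a
sketch from memory: the domain name "win10" is an example, and the core
numbers assume an 8-core host where you want to hand cores 4-7 to the
guest. Edit the domain with:

    $ virsh edit win10

and add a <cputune> section alongside the <vcpu> element:

    <vcpu placement='static'>4</vcpu>
    <cputune>
      <!-- pin each guest vCPU to its own host core (4-7 here) -->
      <vcpupin vcpu='0' cpuset='4'/>
      <vcpupin vcpu='1' cpuset='5'/>
      <vcpupin vcpu='2' cpuset='6'/>
      <vcpupin vcpu='3' cpuset='7'/>
    </cputune>

The same can be done on a running guest with e.g. "virsh vcpupin win10
0 4". Note this is not what VirtualBox's execution cap does: the cap
only throttles how much CPU time the guest gets, it doesn't tie a vCPU
to a particular host core.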