On Tue, Jun 15, 2021 at 5:20 AM Stephan Tobies <Stephan.Tobies@xxxxxxxxxxxx> wrote:
>
> Good afternoon!
>
> We are looking at the use of KVM on x86 to emulate an x86 processor in a
> Virtual Prototyping/SystemC context. The requirements are such that the
> guest OS should ideally run unmodified (i.e., in this case ideally without
> any drivers that know and exploit the fact that the guest is not running
> on real HW but as a KVM guest).
>
> For this, we would also like to control the TSC (as observed by the guest
> via rdtsc and related instructions) in such a way that time is apparently
> stopped whenever the guest is not actively executing in KVM_RUN.
>
> I must admit that I am confused by the multitude of mechanisms and MSRs
> that are available in this context. So, how would one best (approximately)
> stop the increment of the TSC when the guest is not running? If it is
> important: we are also not using the on-chip APIC but our own SystemC
> models. Also, are there extra considerations when running multiple
> virtual processors?
>
> Any pointers would be greatly appreciated!
>
> Thanks and best regards
>
> Stephan Tobies

You can use the VM-exit MSR-save list to save the value of the TSC on
VM-exit, and then adjust the TSC offset field in the VMCS just before
VM-entry to subtract any time the vCPU wasn't in VMX non-root operation.
There will be some slop here, since the guest TSC will run during
VM-entry and part of VM-exit.

However, this is just the beginning of your troubles. It will be
impossible to keep the TSCs synchronized across multiple vCPUs this way.
Moreover, the TSC time domain will get out of sync with other time
domains, such as the APIC time domain and the RTC time domain. Maybe it's
enough to report to the guest that CPUID.80000007H:EDX.INVARIANT_TSC[bit 8]
is zero, but I suspect you are in for a lot of headaches.