Controlling the guest TSC on x86

Good afternoon!

We are looking at using KVM on x86 to emulate an x86 processor in a Virtual Prototyping/SystemC context. The requirements are such that the guest OS should ideally run unmodified (i.e., without any drivers that know and exploit the fact that the guest is not running on real hardware but as a KVM guest).

To this end, we would also like to control the TSC (as observed by the guest via rdtsc and related instructions) so that time appears to stand still whenever the guest is not actively executing in KVM_RUN.
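To make the intent concrete, here is a minimal, untested sketch of what we have in mind, using the KVM_GET_MSRS/KVM_SET_MSRS vCPU ioctls (vcpu_fd and the helper names are hypothetical, from our own wrapper code):

/* Sketch: save the guest TSC via KVM_GET_MSRS when the vCPU leaves
 * KVM_RUN, and write the saved value back via KVM_SET_MSRS before
 * re-entering, so that KVM adjusts the vCPU's TSC offset and the
 * guest sees no time having passed while it was stopped. */
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

#define MSR_IA32_TSC 0x00000010  /* architectural TSC MSR */

/* KVM_GET_MSRS/KVM_SET_MSRS take a struct kvm_msrs followed by an
 * array of entries; a single entry is enough here. */
struct tsc_msr {
    struct kvm_msrs info;
    struct kvm_msr_entry entry;
};

/* Read the guest-visible TSC; vcpu_fd is an open KVM vCPU fd. */
static int save_guest_tsc(int vcpu_fd, uint64_t *tsc)
{
    struct tsc_msr m;

    memset(&m, 0, sizeof(m));
    m.info.nmsrs = 1;
    m.entry.index = MSR_IA32_TSC;
    if (ioctl(vcpu_fd, KVM_GET_MSRS, &m) != 1)  /* returns #MSRs read */
        return -1;
    *tsc = m.entry.data;
    return 0;
}

/* Write the saved value back before the next KVM_RUN. */
static int restore_guest_tsc(int vcpu_fd, uint64_t tsc)
{
    struct tsc_msr m;

    memset(&m, 0, sizeof(m));
    m.info.nmsrs = 1;
    m.entry.index = MSR_IA32_TSC;
    m.entry.data = tsc;
    return ioctl(vcpu_fd, KVM_SET_MSRS, &m) == 1 ? 0 : -1;
}

Is this save/restore-around-KVM_RUN approach the intended mechanism, or is there a more direct way to control the TSC offset?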

I must admit that I am confused by the multitude of mechanisms and MSRs that are available in this context. So, how would one best (at least approximately) stop the TSC from advancing while the guest is not running? If it matters, we are not using the in-kernel APIC but our own SystemC models. Also, are there extra considerations when running multiple virtual processors?
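For the multi-vCPU case, our current (possibly naive) idea, building on the helpers above, would be to pause and resume all vCPUs at the same simulation point so their TSCs stay consistent relative to each other (on_guest_pause, on_guest_resume, and MAX_VCPUS are again hypothetical names from our wrapper):

#define MAX_VCPUS 16  /* hypothetical limit from our wrapper */

static uint64_t saved_tsc[MAX_VCPUS];

/* Called by the SystemC side when guest execution is suspended. */
static void on_guest_pause(const int *vcpu_fds, int nr_vcpus)
{
    for (int i = 0; i < nr_vcpus; i++)
        save_guest_tsc(vcpu_fds[i], &saved_tsc[i]);
}

/* Called by the SystemC side just before guest execution resumes. */
static void on_guest_resume(const int *vcpu_fds, int nr_vcpus)
{
    for (int i = 0; i < nr_vcpus; i++)
        restore_guest_tsc(vcpu_fds[i], saved_tsc[i]);
}

Would per-vCPU writes like this keep the TSCs sufficiently synchronized from the guest's point of view, or does KVM treat such writes specially (e.g., TSC synchronization heuristics) in a way we should be aware of?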

Any pointers would be greatly appreciated!

Thanks and best regards

Stephan Tobies