2016-04-05 14:18+0800, Yang Zhang:
> On 2016/4/5 5:00, Rik van Riel wrote:
>> Given that delivering a timer to a guest seems to
>> involve trapping from the guest to the host, anyway,
>> I don't see a downside to your patch.
>>
>> If that is ever changed (eg. allowing delivery of
>> a timer interrupt to a VCPU without trapping to the
>> host), we may want to revisit this.
>
> Posted interrupt helps in this case.  Currently, KVM doesn't use PI for
> the lapic timer because the timer and the VCPU share the same affinity.
> Now, we can change to use PI for the lapic timer.  The only concern is
> the frequency of timer migration in upstream Linux.  If it is frequent,
> will it bring additional cost?

It's a scheduler bug if timer migration frequency matters. :)

Additional costs arise when the timer and the VCPU are on two different
CPUs (e.g. if both CPUs are in a deep C-state, we waste one extra
wakeup; the timer sometimes needs to send an interrupt).

A fine-tuned KVM could benefit from having the lapic timer backend on a
different physical core, but the general case would need some
experimentation to decide.

I think we'd still want timer interrupts on the same physical core if
the host didn't have PI, and the fraction of timers that can be
injected without forcing a guest entry is important for deciding
whether PI makes the effort worthwhile.

The biggest benefit might come from handling multiple lapic timers in
one host interrupt.
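
For the sake of discussion, a minimal user-space sketch of the choice
described above: if the host has PI and the vCPU is in guest mode, the
expired timer can be posted without a VM exit; otherwise we fall back
to kicking the vCPU.  All types and helpers below are made up for
illustration only and are not the actual KVM interfaces.

/*
 * Hypothetical sketch only -- not the real KVM code.  It models the
 * decision the lapic timer expiry path would face when the hrtimer
 * fires on a pCPU other than the one running the vCPU.
 */
#include <stdbool.h>
#include <stdio.h>

struct vcpu {
        int pcpu;               /* physical CPU the vCPU last ran on */
        bool in_guest_mode;     /* currently executing guest code */
        bool pi_available;      /* host supports posted interrupts */
};

/* Stand-in for writing the vector into the PIR and sending the
 * notification vector; the guest keeps running, no VM exit. */
static void send_posted_interrupt(struct vcpu *v, int vector)
{
        printf("PI: vector %d delivered to pCPU %d without a VM exit\n",
               vector, v->pcpu);
}

/* Stand-in for the IPI that forces a VM exit (or wakes a halted vCPU)
 * so the interrupt can be injected on the next guest entry. */
static void kick_vcpu(struct vcpu *v, int vector)
{
        printf("kick: pCPU %d exits/wakes, vector %d injected on entry\n",
               v->pcpu, vector);
}

static void lapic_timer_expired(struct vcpu *v, int timer_vector)
{
        if (v->pi_available && v->in_guest_mode) {
                /* Best case for PI: no guest exit at all. */
                send_posted_interrupt(v, timer_vector);
        } else {
                /*
                 * Without PI, or with the vCPU halted/scheduled out,
                 * we still pay for a wakeup and a VM entry -- the cost
                 * that keeping the timer on the vCPU's pCPU avoids.
                 */
                kick_vcpu(v, timer_vector);
        }
}

int main(void)
{
        struct vcpu v = { .pcpu = 2, .in_guest_mode = true, .pi_available = true };

        lapic_timer_expired(&v, 0xec);  /* injected without an exit */

        v.in_guest_mode = false;
        lapic_timer_expired(&v, 0xec);  /* falls back to a kick */
        return 0;
}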