On Mon, Mar 13, 2017 at 06:28:16PM +0100, Radim Krčmář wrote:
> 2017-03-08 02:57-0800, Christoffer Dall:
> > Hi Paolo,
> >
> > I'm looking at improving KVM/ARM a bit by calling guest_exit_irqoff
> > before enabling interrupts when coming back from the guest.
> >
> > Unfortunately, this appears to mess up my view of CPU usage using
> > something like htop on the host, because it appears all time is spent
> > inside the kernel.
> >
> > From my analysis, I think this is because we never handle any
> > interrupts before enabling interrupts, where the x86 code does its
> > handle_external_intr, and the result on ARM is that we never
> > increment jiffies before doing the vtime accounting.
>
> (Hm, the counting might be broken on nohz_full then.)

Don't you still have a scheduler tick, even with nohz_full, and
something that will eventually update jiffies then?

> > So my current idea is to increment jiffies according to the
> > clocksource before calling guest_exit_irqoff, but this would require
> > some main clocksource infrastructure changes.
>
> This seems similar to calling the function from the timer interrupt.
> The timer interrupt would be delivered after that and would only waste
> time, so it might actually be slower than just delivering it before ...

That's assuming that the timer interrupt hits at every exit. I don't
think that's the case, but I should measure it.

> How expensive is the interrupt enable/disable cycle that this
> optimization saves?

I'll have to go back and measure this bit specifically again, but I
recall it being a couple of hundred cycles. Not alarming, but worth
looking into.

> > My question is: how important is the vtime accounting on the host
> > from your point of view?
>
> No idea. I'd keep the same behavior on all architectures, though.
>
> The precision of accounting is in jiffies (millions of cycles), so we
> could maybe move it from the hot path to vcpu_load/put(?) without
> affecting the count in usual cases ...
So since sending my original e-mail I have found out that the vtime
accounting logic was changed from ktime to jiffies, which is partly why
we're having problems on arm. See:

  ff9a9b4c4334b53b52ee9279f30bd5dd92ea9bdd
  sched, time: Switch VIRT_CPU_ACCOUNTING_GEN to jiffy granularity

Moving to load/put depends on the semantics of this vtime thing. Is
this counting cycles spent in the VM, as opposed to in the host kernel
and IRQ handling? And is that useful for system profiling or scheduling
decisions? In that case moving to vcpu_load/put doesn't work...

I assume there's a good reason why we call guest_enter() and
guest_exit() in the hot path on every KVM architecture?

> > Worth poking the timekeeping folks about or even trying to convince
> > ourselves that the handle_external_intr thing is worth it?
>
> handle_external_intr() needs some hardware support in order to be more
> than a worthless 'local_irq_enable(); local_irq_disable()' ...
>
> e.g. VMX doesn't queue the interrupt that caused a VM exit in the
> interrupt controller. (VMX's handle_external_intr() doesn't deliver
> any interrupts other than those that might have become pending after
> the exit, but this race is not very important due to accounting
> granularity ...)
>
> Alternatively, the interrupt controller would need to support
> dequeuing without delivery (but delivering in software might still be
> slower than cycling interrupts on and off).

On ARM, I think the main benefits of implementing something like
handle_external_intr would come from two things: (1) you'd avoid the
context synchronization and associated cost of taking an exception on
the CPU, and (2) you'd also (potentially) avoid the additional
save/restore of all the GP registers from the kernel exception entry
path needed to create a usable gp_regs.

I have to look more carefully at whether or not (2) is possible,
because it would mean we'd have to store the guest register state in a
pt_regs structure in the first place and pass that directly to
arch_handle_irq.
Additionally, if we had something like handle_external_intr, the
guest_exit thing would be kinda moot, since we'd do our ticks like x86
does...

Thanks,
-Christoffer