On Thu, 2015-04-09 at 10:44 +0300, Purcareata Bogdan wrote:
> So at this point I was getting kind of frustrated, so I decided to measure
> the time spent in kvm_mpic_write and kvm_mpic_read. I assumed these were
> the main entry points into the in-kernel MPIC and were basically executed
> while holding the spinlock. The scenario was the same - a 24-VCPU guest
> with 24 virtio+vhost interfaces - only this time I ran 24 ping flood
> threads to another board instead of netperf. I assumed this would impose
> heavier stress.
>
> The latencies look pretty OK, around 1-2 us on average, with the maxima
> shown below:
>
> .kvm_mpic_read   14.560
> .kvm_mpic_write  12.608
>
> Those are also microseconds. This was run for about 15 minutes.

What about other entry points such as kvm_set_msi() and kvmppc_mpic_set_epr()?

-Scott
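
[Editor's note: for anyone wanting to reproduce this kind of measurement, below is a minimal sketch of one way to time an individual call from inside the kernel with ktime_get(). The TIME_CALL macro, the worst_us counter, and the argument list in the commented usage line are illustrative assumptions, not Bogdan's actual instrumentation or the real kvm_mpic_read() signature.]

    #include <linux/ktime.h>
    #include <linux/printk.h>

    /* Worst-case latency observed so far, in microseconds (illustrative). */
    static s64 worst_us;

    /*
     * Time a single call and log whenever a new worst case is seen.
     * Uses a GCC statement expression so the wrapped call's return
     * value is passed through unchanged.
     */
    #define TIME_CALL(expr)                                              \
    ({                                                                   \
            ktime_t __t0 = ktime_get();                                  \
            typeof(expr) __ret = (expr);                                 \
            s64 __us = ktime_us_delta(ktime_get(), __t0);                \
            if (__us > worst_us) {                                       \
                    worst_us = __us;                                     \
                    pr_info("TIME_CALL: new max %lld us\n", __us);       \
            }                                                            \
            __ret;                                                       \
    })

    /*
     * Usage (argument list is only a placeholder for the real one):
     *   ret = TIME_CALL(kvm_mpic_read(...));
     */

Printing from inside the timed path adds its own overhead, so in practice one would likely only record the maximum and dump it elsewhere, or use the ftrace function_graph tracer instead.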