On Mon, 2015-04-20 at 13:53 +0300, Purcareata Bogdan wrote:
> On 10.04.2015 02:53, Scott Wood wrote:
> > On Thu, 2015-04-09 at 10:44 +0300, Purcareata Bogdan wrote:
> >> So at this point I was getting kinda frustrated, so I decided to measure
> >> the time spent in kvm_mpic_write and kvm_mpic_read. I assumed these were
> >> the main entry points into the in-kernel MPIC and were basically executed
> >> while holding the spinlock. The scenario was the same - 24 VCPUs guest,
> >> with 24 virtio+vhost interfaces, only this time I ran 24 ping flood
> >> threads to another board instead of netperf. I assumed this would impose
> >> a heavier stress.
> >>
> >> The latencies look pretty ok, around 1-2 us on average, with the max
> >> shown below:
> >>
> >> .kvm_mpic_read   14.560
> >> .kvm_mpic_write  12.608
> >>
> >> Those are also microseconds. This was run for about 15 mins.
> >
> > What about other entry points such as kvm_set_msi() and
> > kvmppc_mpic_set_epr()?
>
> Thanks for the pointers! I redid the measurements, this time for the
> functions run with the openpic lock held:
>
> .kvm_mpic_read_internal (.kvm_mpic_read)     1.664
> .kvmppc_mpic_set_epr                         6.880
> .kvm_mpic_write_internal (.kvm_mpic_write)   7.840
> .openpic_msi_write (.kvm_set_msi)           10.560
>
> Same scenario, 15 mins; the numbers are microseconds.
>
> There was a weird situation for .kvmppc_mpic_set_epr - its corresponding
> inner function is kvmppc_set_epr, which is a static inline. Removing the
> static inline yields a compiler crash (Segmentation fault (core dumped) -
> scripts/Makefile.build:441: recipe for target 'arch/powerpc/kvm/kvm.o'
> failed), but that's a different story, so I just let it be for now. The
> point is that the measured time may include other work done after the lock
> has been released, but before the function actually returned. I noticed
> this was the case for .kvm_set_msi, which could take up to 90 ms, not
> actually under the lock. This made me change what I'm looking at.

kvm_set_msi does pretty much nothing outside the lock -- I suspect you're
measuring an interrupt that happened as soon as the lock was released.

> So far it looks pretty decent. Are there any other MPIC entry points worthy
> of investigation?

I don't think so.

> Or perhaps a different stress scenario involving a lot of VCPUs
> and external interrupts?

You could instrument the MPIC code to find out how many loop iterations you
maxed out on, and compare that to the theoretical maximum.

-Scott
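
As an illustration of the per-function timing discussed above, here is a
minimal sketch of wrapping the work done under the openpic lock with
ktime-based timing and reporting new worst cases. The lock and the locked
work are placeholders, not the actual arch/powerpc/kvm/mpic.c code paths:

	/*
	 * Illustrative sketch only, not the real mpic.c code: time the work
	 * done while the openpic lock is held and report new worst cases.
	 * "opp_lock" stands in for the real opp->lock.
	 */
	#include <linux/ktime.h>
	#include <linux/spinlock.h>
	#include <linux/kernel.h>

	static DEFINE_SPINLOCK(opp_lock);	/* placeholder for opp->lock */
	static s64 max_locked_ns;		/* worst case seen, nanoseconds */

	static void timed_locked_section(void)
	{
		unsigned long flags;
		ktime_t t0;
		s64 delta_ns;

		spin_lock_irqsave(&opp_lock, flags);
		t0 = ktime_get();

		/* ... the real MPIC register access / IRQ update goes here ... */

		delta_ns = ktime_to_ns(ktime_sub(ktime_get(), t0));
		spin_unlock_irqrestore(&opp_lock, flags);

		if (delta_ns > max_locked_ns) {
			max_locked_ns = delta_ns;
			trace_printk("new max under the openpic lock: %lld ns\n",
				     delta_ns);
		}
	}

trace_printk() keeps the reporting itself cheap, since it writes to the
ftrace ring buffer rather than the console.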
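
The loop-iteration instrumentation suggested at the end could be sketched the
same way -- again with placeholder names, and with the bound used here (256)
standing in for whatever the real theoretical maximum of the scanned loop is:

	/*
	 * Illustrative sketch only: record the worst-case iteration count of
	 * a loop of interest so it can be compared against its theoretical
	 * maximum.  SCAN_BOUND is an assumed stand-in for the real limit
	 * (e.g. the number of interrupt sources the MPIC scans).
	 */
	#define SCAN_BOUND 256

	static unsigned int max_scan_iterations;

	static void note_scan_length(unsigned int iterations)
	{
		if (iterations > max_scan_iterations) {
			max_scan_iterations = iterations;
			trace_printk("scan loop: %u iterations (bound %u)\n",
				     iterations, SCAN_BOUND);
		}
	}

The call would go right after whichever loop is suspected of dominating the
time under the lock, with a counter incremented once per iteration; if the
reported maximum sits near the bound under the ping-flood scenario, the worst
case has effectively been exercised.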