Re: [PATCH v2 0/9] KVM: arm/arm64: vgic: ITS translation cache

Hi Andre,

On 23/07/2019 12:14, Andre Przywara wrote:
> On Tue, 11 Jun 2019 18:03:27 +0100
> Marc Zyngier <marc.zyngier@xxxxxxx> wrote:
> 
> Hi,
> 
>> It recently became apparent[1] that our LPI injection path is not as
>> efficient as it could be when injecting interrupts coming from a VFIO
>> assigned device.
>>
>> Although the proposed patch wasn't 100% correct, it outlined at least
>> two issues:
>>
>> (1) Injecting an LPI from VFIO always results in a context switch to a
>>     worker thread: no good
>>
>> (2) We have no way of amortising the cost of translating a DID+EID pair
>>     to an LPI number
>>
>> The reason for (1) is that we may sleep when translating an LPI, so we
>> do need to be in process context. A way to fix that is to implement a
>> small LPI translation cache that could be looked up from an atomic
>> context. It would also solve (2).
>>
>> This is what this small series proposes. It implements a very basic
>> LRU cache of pre-translated LPIs, which gets used to implement
>> kvm_arch_set_irq_inatomic. The size of the cache is currently
>> hard-coded at 16 times the number of vcpus, a number I have picked
>> under the influence of Ali Saidi. If that's not enough for you, blame
>> me, though.
>>
>> Does it work? Well, it doesn't crash, and is thus perfect. More
>> seriously, I don't really have a way to benchmark it directly, so my
>> observations are only indirect:
>>
>> On a TX2 system, I run a 4 vcpu VM with an Ethernet interface passed
>> to it directly. From the host, I inject interrupts using debugfs. In
>> parallel, I look at the number of context switches and the number of
>> interrupts on the host. Without this series, I get the same number for
>> both IRQ and CS (about half a million of each per second is pretty
>> easy to reach). With this series, the number of context switches drops
>> to something pretty small (in the low 2k), while the number of
>> interrupts stays the same.
>>
>> Yes, this is a pretty rubbish benchmark, what did you expect? ;-)
>>
>> So I'm putting this out for people with real workloads to try out and
>> report what they see.
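
To make the idea above a bit more concrete: conceptually, the cache is a
small fixed-size table of DID+EID->LPI translations kept in
most-recently-used order, which can be scanned without sleeping or
allocating. The user-space model below is only a rough sketch of that
idea; the structure layout, sizes and function names are invented for
illustration and are not the code from the series, which lives in the
vgic ITS code and uses locking that is safe from the
kvm_arch_set_irq_inatomic() path.

/*
 * Minimal user-space model of an LRU cache mapping DeviceID+EventID to an
 * LPI number. Purely illustrative: entry layout, size and function names
 * are made up and do not match the kernel implementation.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define CACHE_ENTRIES	(16 * 4)	/* 16 entries per vcpu, 4 vcpus */

struct lpi_cache_entry {
	bool		valid;
	uint32_t	devid;
	uint32_t	eventid;
	uint32_t	intid;		/* the translated LPI number */
};

/* Index 0 is the most recently used entry, the last index the least. */
static struct lpi_cache_entry cache[CACHE_ENTRIES];

/* Move a hit entry to the front so cold entries age towards eviction. */
static void cache_promote(int idx)
{
	struct lpi_cache_entry hit = cache[idx];

	memmove(&cache[1], &cache[0], idx * sizeof(cache[0]));
	cache[0] = hit;
}

/* Lookup is bounded and allocation-free, hence usable from atomic context. */
static bool cache_lookup(uint32_t devid, uint32_t eventid, uint32_t *intid)
{
	for (int i = 0; i < CACHE_ENTRIES; i++) {
		if (!cache[i].valid)
			continue;
		if (cache[i].devid == devid && cache[i].eventid == eventid) {
			*intid = cache[i].intid;
			cache_promote(i);
			return true;
		}
	}
	return false;
}

/* Insert after a slow-path translation; the least recently used entry goes. */
static void cache_insert(uint32_t devid, uint32_t eventid, uint32_t intid)
{
	memmove(&cache[1], &cache[0], (CACHE_ENTRIES - 1) * sizeof(cache[0]));
	cache[0] = (struct lpi_cache_entry){
		.valid		= true,
		.devid		= devid,
		.eventid	= eventid,
		.intid		= intid,
	};
}

int main(void)
{
	uint32_t intid;

	cache_insert(0x10, 0x3, 8195);
	if (cache_lookup(0x10, 0x3, &intid))
		printf("DID 0x10/EID 0x3 -> LPI %u (cache hit)\n",
		       (unsigned int)intid);
	if (!cache_lookup(0x10, 0x4, &intid))
		printf("DID 0x10/EID 0x4 -> miss, take the slow path\n");
	return 0;
}

A hit on a path like this is what lets kvm_arch_set_irq_inatomic()
inject the LPI directly instead of punting to the worker thread, which
is where the saving in context switches comes from.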
> 
> So I gave that a shot with some benchmarks. As expected, it is quite hard
> to show an improvement with just one guest running, although we could show
> a 103%(!) improvement of the memcached QPS score in one experiment when
> running it in a guest with an external load generator.

Is that a fluke or something that you have been able to reproduce
consistently? Because doubling the performance of anything is something
I have a hard time believing in... ;-)

> Throwing more users into the game showed a significant improvement:
> 
> Benchmark 1: kernel compile/FIO: Compiling a kernel on the host, while
> letting a guest run FIO with 4K randreads from a passed-through NVMe SSD:
> The IOPS with this series improved by 27% compared to pure mainline,
> reaching 80% of the host value. Kernel compilation time improved by 8.5%
> compared to mainline.

OK, that's interesting. I guess that's the effect of not unnecessarily
disrupting the scheduling with one extra context-switch per interrupt.

> 
> Benchmark 2: FIO/FIO: Running FIO on a passed through SATA SSD in one
> guest, and FIO on a passed through NVMe SSD in another guest, at the same
> time:
> The IOPS with this series improved by 23% for the NVMe and 34% for the
> SATA disk, compared to pure mainline.

I guess that's the same effect: not context-switching leaves more
resources available to other processes in the system.

> So judging from these results, I think this series is a significant
> improvement, which justifies merging it so it can receive wider testing.
> 
> It would be good if others could also do performance experiments and post
> their results.

Wishful thinking...

Anyway, I'll repost the series shortly now that Eric has gone through it.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...