Re: [kvm-unit-tests PATCH] x86: eventinj: Do a real io_delay()

> On May 3, 2019, at 10:38 AM, Krish Sadhukhan <krish.sadhukhan@xxxxxxxxxx> wrote:
> 
> 
> On 5/2/19 11:49 AM, nadav.amit@xxxxxxxxx wrote:
>> From: Nadav Amit <nadav.amit@xxxxxxxxx>
>> 
>> There is no guarantee that a self-IPI will be delivered immediately.
>> io_delay() is called after the self-IPI is generated, but currently does
>> nothing. Change io_delay() to wait for 10000 cycles, which should be
>> enough on any system.
>> 
>> Signed-off-by: Nadav Amit <nadav.amit@xxxxxxxxx>
>> ---
>>  x86/eventinj.c | 5 +++++
>>  1 file changed, 5 insertions(+)
>> 
>> diff --git a/x86/eventinj.c b/x86/eventinj.c
>> index 8064eb9..250537b 100644
>> --- a/x86/eventinj.c
>> +++ b/x86/eventinj.c
>> @@ -18,6 +18,11 @@ void do_pf_tss(void);
>>    static inline void io_delay(void)
>>  {
>> +	u64 start = rdtsc();
>> +
>> +	do {
>> +		pause();
>> +	} while (rdtsc() - start < 10000);
>>  }
>>    static void apic_self_ipi(u8 v)
> 
> Perhaps call delay() (in delay.c) inside of io_delay() OR perhaps replace
> all instances of io_delay() with delay() ?

This delay() is a bit of a mess: it measures time by counting pause()
invocations rather than actual cycles. There is also a second implementation
in ioapic.c, which is itself broken, since it has no compiler barrier and the
loop can be optimized away.

Let me see what I can do...
