Re: [RFC 1/1] KVM: selftests: rseq_test: use vdso_getcpu() instead of syscall()

On Thu, Nov 03, 2022, Gavin Shan wrote:
> On 11/3/22 8:46 AM, Sean Christopherson wrote:
> > On Wed, Nov 02, 2022, Robert Hoo wrote:
> > > @@ -253,7 +269,7 @@ int main(int argc, char *argv[])
> > >   			 * across the seq_cnt reads.
> > >   			 */
> > >   			smp_rmb();
> > > -			sys_getcpu(&cpu);
> > > +			vdso_getcpu(&cpu, NULL, NULL);
> > >   			rseq_cpu = rseq_current_cpu_raw();
> > >   			smp_rmb();
> > >   		} while (snapshot != atomic_read(&seq_cnt));
> > 
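
(Aside: one common way to get at __vdso_getcpu at runtime is dlsym() on the vDSO
that the kernel already maps into every process.  The sketch below is illustrative
only, not necessarily how this patch resolves the symbol; parsing the vDSO ELF
found via getauxval(AT_SYSINFO_EHDR) is the other usual option.)

#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>

/* Illustrative only: signature of the x86-64 __vdso_getcpu symbol. */
typedef long (*getcpu_fn)(unsigned int *cpu, unsigned int *node, void *tcache);

static getcpu_fn vdso_getcpu;

static void init_vdso_getcpu(void)
{
        /* glibc allows dlopen() of the already-mapped vDSO by its soname. */
        void *vdso = dlopen("linux-vdso.so.1", RTLD_LAZY | RTLD_LOCAL | RTLD_NOLOAD);

        if (vdso)
                vdso_getcpu = (getcpu_fn)dlsym(vdso, "__vdso_getcpu");
        if (!vdso_getcpu)
                fprintf(stderr, "no __vdso_getcpu, fall back to syscall(__NR_getcpu)\n");
}
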
> > Something seems off here.  Half of the iterations in the migration thread have a
> > delay of 5+us, which should be more than enough time to complete a few getcpu()
> > syscalls to stabilize the CPU.
> > 
> > Has anyone tried to figure out why the vCPU thread is apparently running slow?
> > E.g. is KVM_RUN itself taking a long time, is the task not getting scheduled in,
> > etc...  I can see how using vDSO would make the vCPU more efficient, but I'm
> > curious as to why that's a problem in the first place.
> > 
> > Anyways, assuming there's no underlying problem that can be solved, the easier
> > solution is to just bump the delay in the migration thread.  As per its gigantic
> > comment, the original bug reproduced with up to 500us delays, so bumping the min
> > delay to e.g. 5us is acceptable.  If that doesn't guarantee the vCPU meets its
> > quota, then something else is definitely going on.
> > 
> 
> I doubt it's still caused by a busy system, as mentioned previously [1]. At least,
> I failed to reproduce the issue on my ARM64 system until some workloads were
> forced to hog CPUs.

Yeah, I suspect something else as well.  My best guess at this point is mitigations;
I'll test that tomorrow to see if it makes any difference.

> Looking at the implementation of syscall(NR_getcpu), it simply copies per-cpu
> data from the kernel to userspace, so I don't see why it should consume a lot
> of time. Since a system call is handled via an interrupt/exception, the time
> consumed by the interrupt/exception handler should be architecture dependent.
> Besides, the time needed by ioctl(KVM_RUN) also differs across architectures.

Yes, but Robert is seeing problems on x86-64 that I have been unable to reproduce,
i.e. this isn't a matter of architectural differences.
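
For anyone who wants to quantify the per-call difference, here's a hypothetical
microbenchmark (not part of the selftest) comparing a raw getcpu() syscall against
glibc's sched_getcpu(), which is typically serviced without entering the kernel
(via the vDSO, or rseq on newer glibc):

/* Hypothetical microbenchmark; build with: cc -O2 getcpu_bench.c */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>

#define ITERS 1000000

static long long now_ns(void)
{
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

int main(void)
{
        unsigned int cpu, node;
        long long t0;
        int i;

        t0 = now_ns();
        for (i = 0; i < ITERS; i++)
                syscall(SYS_getcpu, &cpu, &node, NULL);
        printf("raw syscall:    %lld ns/call\n", (now_ns() - t0) / ITERS);

        t0 = now_ns();
        for (i = 0; i < ITERS; i++)
                cpu = sched_getcpu();
        printf("sched_getcpu(): %lld ns/call\n", (now_ns() - t0) / ITERS);

        return 0;
}

Even so, the saving is on the order of a few hundred nanoseconds per call, which
shouldn't matter when half the migration delays are 5+us; the slowdown needs an
explanation beyond syscall overhead.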

> [1] https://lore.kernel.org/kvm/d8290cbe-5d87-137a-0633-0ff5c69d57b0@xxxxxxxxxx/
> 
> I think Sean's suggestion to bump the delay to 5us would be a quick fix, if it
> helps. However, the test will then take longer to complete. Sean, would you mind
> also reducing NR_TASK_MIGRATIONS from 100000 to 20000?

I don't think the number of migrations needs to be cut by 5x; the +5us bump only
changes the average delay from ~5us to ~7.5us.
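
For illustration, the bump could be as small as this (assuming the worker's delay
is currently computed as "(i % 10) + 1" usec; the exact expression in rseq_test.c
may differ):

-			usleep((i % 10) + 1);	/* 1..10us, ~5.5us average */
+			usleep((i % 6) + 5);	/* 5..10us, ~7.5us average */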

But before we start mucking with the delay, I want to at least understand _why_
a lower bound of 1us is insufficient.
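
One way to get that data would be to timestamp the vCPU thread's retry loop and
flag the slow passes, to separate "getcpu() is slow" from "the task isn't getting
scheduled".  Purely illustrative, with variable names mirroring the test's loop:

        /* Illustrative instrumentation; not proposed for merging. */
        struct timespec t0, t1;
        long long delta_ns;

        do {
                clock_gettime(CLOCK_MONOTONIC, &t0);

                snapshot = atomic_read(&seq_cnt) & ~1;
                smp_rmb();
                sys_getcpu(&cpu);
                rseq_cpu = rseq_current_cpu_raw();
                smp_rmb();

                clock_gettime(CLOCK_MONOTONIC, &t1);
                delta_ns = (t1.tv_sec - t0.tv_sec) * 1000000000LL +
                           (t1.tv_nsec - t0.tv_nsec);
                if (delta_ns > 10000)   /* flag passes slower than 10us */
                        fprintf(stderr, "slow iteration: %lld ns\n", delta_ns);
        } while (snapshot != atomic_read(&seq_cnt));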


