Re: [PATCH 2/2] x86/vdso: Add VCLOCK_HVCLOCK vDSO clock read method


 



On Thu, 9 Feb 2017 14:55:50 -0800
Andy Lutomirski <luto@xxxxxxxxxxxxxx> wrote:

> On Thu, Feb 9, 2017 at 12:45 PM, KY Srinivasan <kys@xxxxxxxxxxxxx> wrote:
> >
> >  
> >> -----Original Message-----
> >> From: Thomas Gleixner [mailto:tglx@xxxxxxxxxxxxx]
> >> Sent: Thursday, February 9, 2017 9:08 AM
> >> To: Vitaly Kuznetsov <vkuznets@xxxxxxxxxx>
> >> Cc: x86@xxxxxxxxxx; Andy Lutomirski <luto@xxxxxxxxxxxxxx>; Ingo Molnar
> >> <mingo@xxxxxxxxxx>; H. Peter Anvin <hpa@xxxxxxxxx>; KY Srinivasan
> >> <kys@xxxxxxxxxxxxx>; Haiyang Zhang <haiyangz@xxxxxxxxxxxxx>; Stephen
> >> Hemminger <sthemmin@xxxxxxxxxxxxx>; Dexuan Cui
> >> <decui@xxxxxxxxxxxxx>; linux-kernel@xxxxxxxxxxxxxxx;
> >> devel@xxxxxxxxxxxxxxxxxxxxxx; virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx
> >> Subject: Re: [PATCH 2/2] x86/vdso: Add VCLOCK_HVCLOCK vDSO clock read
> >> method
> >>
> >> On Thu, 9 Feb 2017, Vitaly Kuznetsov wrote:  
> >> > +#ifdef CONFIG_HYPERV_TSCPAGE
> >> > +static notrace u64 vread_hvclock(int *mode)
> >> > +{
> >> > +   const struct ms_hyperv_tsc_page *tsc_pg =
> >> > +           (const struct ms_hyperv_tsc_page *)&hvclock_page;
> >> > +   u64 sequence, scale, offset, current_tick, cur_tsc;
> >> > +
> >> > +   while (1) {
> >> > +           sequence = READ_ONCE(tsc_pg->tsc_sequence);
> >> > +           if (!sequence)
> >> > +                   break;
> >> > +
> >> > +           scale = READ_ONCE(tsc_pg->tsc_scale);
> >> > +           offset = READ_ONCE(tsc_pg->tsc_offset);
> >> > +           rdtscll(cur_tsc);
> >> > +
> >> > +           current_tick = mul_u64_u64_shr(cur_tsc, scale, 64) + offset;
> >> > +
> >> > +           if (READ_ONCE(tsc_pg->tsc_sequence) == sequence)
> >> > +                   return current_tick;  
> >>
> >> That sequence stuff lacks still a sensible explanation. It's fundamentally
> >> different from the sequence counting we do in the kernel, so documentation
> >> for it is really required.  
> >
> > The host is updating multiple fields in this shared TSC page and the sequence number is
> > used to ensure that the guest sees a consistent set values published. If I remember
> > correctly, Xen has a similar mechanism.  
> 
> So what's the actual protocol?  When the hypervisor updates the page,
> does it freeze all guest cpus?  If not, how does it maintain
> atomicity?

The protocol looks a lot like a Linux seqlock, but the seqlock has an extra
protection which is missing here.

The host needs to update the sequence number twice, once at the start and once
at the end of each update, to guarantee ordering. Otherwise the host and the
guest can race:

	Guest					Host
						write offset
						write scale
						set tsc_sequence = N
	read sequence (= N)
	read scale
						write scale
						write offset
	read offset
	check sequence == N	(passes, yet the guest has paired a
				 stale scale with a new offset)
						set tsc_sequence = N + 1

It looks like the current host-side protocol is wrong.

The solution that Andi Kleen invented, and that I used in seqlock, is for the
writer to update the sequence number at both the start and the end of the
transaction. The sequence is odd while an update is in progress, so a reader
that sees an odd sequence number knows it is looking at stale data.
	Guest					Host
						write offset
						write scale
						set tsc_sequence = N	(end of transaction, N is even)
	read sequence (= N)
	N is even, proceed
	read scale
						set tsc_sequence = N + 1	(start of transaction, odd)
						write scale
						write offset
	read offset
	check sequence == N	(fails, it is now N + 1; retry)
						set tsc_sequence = N + 2	(end of transaction, even)
	read sequence (= N + 2)
	N + 2 is even, proceed
	read scale
	read offset
	check sequence == N + 2	(passes)
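The host-side half of that protocol can be sketched in plain C. This is an
illustrative userspace model, not hypervisor code: the struct mirrors the
field names of ms_hyperv_tsc_page, default (seq_cst) C11 atomics stand in for
the real memory barriers, and the special sequence value 0 ("TSC page
invalid") is ignored for simplicity.

```c
#include <stdatomic.h>
#include <stdint.h>

/* Illustrative layout mirroring ms_hyperv_tsc_page (not the real ABI). */
struct tsc_page {
	_Atomic uint32_t tsc_sequence;
	_Atomic uint64_t tsc_scale;
	_Atomic uint64_t tsc_offset;
};

/* Corrected host-side update: sequence stays odd for the whole transaction. */
static void host_update(struct tsc_page *pg, uint64_t scale, uint64_t offset)
{
	uint32_t seq = atomic_load(&pg->tsc_sequence);

	/* Start of transaction: odd sequence tells readers to spin. */
	atomic_store(&pg->tsc_sequence, seq + 1);

	atomic_store(&pg->tsc_scale, scale);
	atomic_store(&pg->tsc_offset, offset);

	/*
	 * End of transaction: back to even. A reader that raced with the
	 * writes above sees a sequence mismatch and retries.
	 */
	atomic_store(&pg->tsc_sequence, seq + 2);
}
```

Each completed update advances the sequence by two, so an even value always
means "no update in flight" and any change of value invalidates a concurrent read.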

Also, it is faster to read only scale and offset inside the retry loop,
deferring the TSC read and the multiply until after a consistent scale/offset
pair has been acquired.
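The guest-side loop with that optimization might look like the following
userspace sketch. Everything here is illustrative: read_tsc() is a stub
standing in for rdtscll(), the 128-bit multiply replaces mul_u64_u64_shr(),
default C11 atomics replace READ_ONCE() plus barriers, the odd/even spin
assumes the corrected host protocol above, and the real code's sequence == 0
fallback to another clock source is omitted.

```c
#include <stdatomic.h>
#include <stdint.h>

/* Illustrative layout mirroring ms_hyperv_tsc_page (not the real ABI). */
struct tsc_page {
	_Atomic uint32_t tsc_sequence;
	_Atomic uint64_t tsc_scale;
	_Atomic uint64_t tsc_offset;
};

/* Stub; a real vDSO implementation would execute rdtsc here. */
static uint64_t read_tsc(void)
{
	return 1000;
}

static uint64_t read_hvclock(struct tsc_page *pg)
{
	uint32_t seq;
	uint64_t scale, offset;

	do {
		/* Spin while an update is in flight (odd sequence). */
		do {
			seq = atomic_load(&pg->tsc_sequence);
		} while (seq & 1);

		scale  = atomic_load(&pg->tsc_scale);
		offset = atomic_load(&pg->tsc_offset);

		/* Retry if the host bumped the sequence while we read. */
	} while (atomic_load(&pg->tsc_sequence) != seq);

	/* Only now read the TSC and do the 64.64 fixed-point multiply. */
	return (uint64_t)(((unsigned __int128)read_tsc() * scale) >> 64)
	       + offset;
}
```

Keeping the TSC read and multiply outside the loop means a retry only repeats
two cheap loads, and the timestamp is taken as late as possible.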

	


_______________________________________________
devel mailing list
devel@xxxxxxxxxxxxxxxxxxxxxx
http://driverdev.linuxdriverproject.org/mailman/listinfo/driverdev-devel


