Re: [PATCH] Correct GPU timestamp read

On Mon, Sep 22, 2014 at 06:22:53PM +0200, Jacek Danecki wrote:
> The current implementation of reading the GPU timestamp is broken.
> It returns the lower 32 bits shifted up by 32 bits (XXXXXXXX00000000 instead of YYYYYYYYXXXXXXXX).
> The change below adds the ability to read the high part of that register separately.
> 
> Signed-off-by: Jacek Danecki jacek.danecki@xxxxxxxxx

The problem is that beignet already works around the broken hw read,
whereas mesa does not. If we apply the fix in the kernel, we break the
one user of it in beignet but fix all the existing users in mesa.

The userspace workaround is effectively:

  u64 v = reg_read(TIMESTAMP);
  if (lower_32_bits(v) == 0) {
	  /* Broken hw read: XXXXXXXX00000000, so shift the low dword
	   * back down and fetch the high dword separately. Note the
	   * cast: a 32-bit reg_read result shifted by 32 would be
	   * undefined without it. */
	  v >>= 32;
	  v |= (u64)reg_read(TIMESTAMP + 4) << 32;
  }

Our ABI says read 8 bytes from this location. I am not sure if it says
anything about what to do if the hardware is broken, or what that value
means. That value already depends upon generation and architecture, e.g.
on x86-32 this is done as 2 readl, but on x86-64 as a single readq.
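
To illustrate, a minimal sketch of the two read paths (this is not the
actual i915 code; the regs pointer and the CONFIG_64BIT split here are
assumptions for illustration only):

  #ifdef CONFIG_64BIT
	/* one atomic 8-byte MMIO read */
	u64 v = readq(regs + TIMESTAMP);
  #else
	/* two 4-byte MMIO reads; not atomic, the counter can
	 * carry between the two samples */
	u64 v = (u64)readl(regs + TIMESTAMP + 4) << 32;
	v |= readl(regs + TIMESTAMP);
  #endif

The 32-bit path is part of why the returned value is already not
well-defined across architectures: the two halves are sampled at
different times.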

The question comes down to fixing mesa and breaking beignet, or doing nothing.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/intel-gfx



