On Nov 22, 2016 23:49, "Chris Wilson" <chris@xxxxxxxxxxxxxxxxxx> wrote:
>
> On Tue, Nov 22, 2016 at 11:32:38PM +0000, Robert Bragg wrote:
> > Thanks for sending this out. It looked good to me, but testing shows a
> > 'divide error'.
> >
> > I haven't double-checked, but I think it's because the max OA exponent
> > (31) converted to nanoseconds gives a period > UINT32_MAX with the lower
> > 32 bits all zero, and the do_div() denominator argument is only 32 bits,
> > so the divisor truncates to zero.
>
> Hmm, I thought do_div() was u64 / u64, but no, it is u64 / u32. Looks
> like the appropriate function would be div64_u64().
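
Right, and for the archive, a minimal sketch of the failure mode and
the fix (the function names and values here are illustrative, not taken
from the actual patch):

  #include <asm/div64.h>    /* do_div() */
  #include <linux/math64.h> /* div64_u64() */

  /* do_div(n, base) is u64 / u32: a u64 divisor is silently
   * truncated. With a period of 2^32 * 80ns the low 32 bits are all
   * zero, so the divisor becomes 0 and we take a divide error.
   */
  static u64 broken(u64 ticks, u64 period_ns)
  {
          do_div(ticks, period_ns); /* divisor truncates to 0 */
          return ticks;
  }

  /* div64_u64() is u64 / u64 and keeps the full divisor intact. */
  static u64 fixed(u64 ticks, u64 period_ns)
  {
          return div64_u64(ticks, period_ns);
  }
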
>
> > It corresponds to a 5 minute period, which is a bit silly, so we could
> > reduce the max exponent. A period of UINT32_MAX ns is about 4 seconds,
> > and I can't currently think of a good use case for such a low sampling
> > frequency.
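
(Rough numbers, assuming the gen7-style relationship of period =
2^(exponent + 1) timestamp ticks with the 80ns / 12.5MHz Haswell
timestamp tick; the helper name is made up for illustration:)

  /* Illustrative sketch of the exponent -> period conversion. */
  static u64 sketch_oa_exponent_to_ns(int exponent)
  {
          return 80ull * (2ull << exponent); /* 80ns per tick */
  }

  /* exponent 31: 2^32 * 80ns ~= 344 seconds (the roughly 5 minute
   * period above), while UINT32_MAX ns ~= 4.29 seconds.
   */
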
> >
> > Instead of changing the max OA exponent (where the relationship to the
> > period changes for gen9, and may become fuzzy if we start training our
> > view of the gpu timestamp frequency instead of using constants), maybe
> > we should set an early limit, rejecting any exponent that results in a
> > period > UINT32_MAX?
>
> Seems like picking the right function would help!
Or that, yep. Sounds good to me, thanks. (For reference, the early
limit I had in mind would have looked something like the sketch below.)
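
  /* Hypothetical sketch of the early limit; the identifiers are made
   * up, not taken from the actual patch. U32_MAX is the kernel's
   * spelling of UINT32_MAX.
   */
  if (sketch_oa_exponent_to_ns(oa_exponent) > U32_MAX) {
          DRM_DEBUG("OA exponent %d gives a period > UINT32_MAX ns\n",
                    oa_exponent);
          return -EINVAL;
  }
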
- Robert
> -Chris
>
> --
> Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx