Bas Nieuwenhuizen <bas@xxxxxxxxxxxxxxxxxxx> writes:

>> +   end = radv_clock_gettime(CLOCK_MONOTONIC_RAW);
>> +
>> +   uint64_t clock_period = end - begin;
>> +   uint64_t device_period = DIV_ROUND_UP(1000000, clock_crystal_freq);
>> +
>> +   *pMaxDeviation = MAX2(clock_period, device_period);
>
> Should this not be a sum? Those deviations can happen independently
> from each other, so worst case both deviations happen in the same
> direction, which causes the magnitudes to be combined.

This use of MAX2 comes right from one of the issues raised during work
on the extension:

    8) Can the maximum deviation reported ever be zero?

    RESOLVED: Unless the tick of each clock corresponding to the set
    of time domains coincides and all clocks can literally be sampled
    simultaneously, there isn't really a possibility for the maximum
    deviation to be zero, so by convention the maximum deviation is
    always at least the maximum of the length of the ticks of the set
    of time domains calibrated and thus can never be zero.

I can't wrap my brain around this entirely, but I think it says that
the reported deviation is supposed to reflect only the fact that we
aren't sampling the clocks at the same time, and so there may be a
'tick' of error for any sampled clock.

If you look at the previous issue in the spec, it actually contains
the pseudocode I used in this implementation for computing
maxDeviation, which doesn't include anything about the time period of
the GPU. Jason suggested using the GPU tick period as the minimum
value for maxDeviation to make sure we never accidentally return
zero, as that is forbidden by the spec. We might be able to use 1
instead, but it won't matter in practice, as the time it takes to
actually sample all of the clocks is far longer than a GPU tick.
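For context, here is a minimal standalone sketch of the sampling
pattern under discussion. The GPU clock read is stubbed out
(sample_gpu_timestamp() and the crystal frequency value are
illustrative assumptions, not the actual radv code), but the deviation
computation follows the quoted hunk:

    #define _GNU_SOURCE /* CLOCK_MONOTONIC_RAW */
    #include <stdint.h>
    #include <time.h>

    #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
    #define MAX2(a, b)         ((a) > (b) ? (a) : (b))

    /* Hypothetical stand-in for reading the GPU's free-running
     * counter; in the real driver this is a device register read. */
    static uint64_t sample_gpu_timestamp(void)
    {
       return 0; /* placeholder */
    }

    /* Host clock in nanoseconds, as in radv_clock_gettime(). */
    static uint64_t clock_gettime_ns(clockid_t clock_id)
    {
       struct timespec ts;
       if (clock_gettime(clock_id, &ts) < 0)
          return 0;
       return (uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec;
    }

    /* GPU reference clock in kHz (assumed value for illustration);
     * 1000000 / freq_in_kHz is the tick length in nanoseconds. */
    static const uint64_t clock_crystal_freq = 100000; /* 100 MHz */

    static void get_calibrated_timestamps(uint64_t *gpu_ts,
                                          uint64_t *mono_ts,
                                          uint64_t *pMaxDeviation)
    {
       /* Bracket the GPU sample with two host clock samples. */
       uint64_t begin = clock_gettime_ns(CLOCK_MONOTONIC_RAW);
       *gpu_ts  = sample_gpu_timestamp();
       *mono_ts = begin;
       uint64_t end = clock_gettime_ns(CLOCK_MONOTONIC_RAW);

       /* Error from not sampling every clock at the same instant... */
       uint64_t clock_period = end - begin;
       /* ...floored at one GPU tick so the result is never zero. */
       uint64_t device_period = DIV_ROUND_UP(1000000, clock_crystal_freq);

       *pMaxDeviation = MAX2(clock_period, device_period);
    }

Bas's alternative would replace the final line with
clock_period + device_period, bounding the worst case where both error
sources add in the same direction.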
--
-keith