On 12/28/2012 03:45 PM, Arto Merilainen wrote:
> On 12/28/2012 08:47 AM, Mark Zhang wrote:
>>> +int tegra_fence_is_valid(const struct tegra_fence *fence)
>>> +{
>>> +	int valid = fence ? 1 : 0;
>>> +	valid = valid && fence->id != (uint32_t) -1;
>>> +	valid = valid && fence->id < 32;
>>
>> Hardcoded value here. This assumes there are always 32 syncpts.
>> Changing this to a macro wrapped in a Tegra-version ifdef would be
>> better.
>>
>
> You are correct. I will fix this.
>
>>> +	return valid;
>>> +}
>>> +
>> [...]
>>> +
>>> +	/* Add fences */
>>> +	if (num_fences) {
>>> +
>>> +		tegra_stream_push(stream,
>>> +			nvhost_opcode_setclass(NV_HOST1X_CLASS_ID,
>>> +				host1x_uclass_wait_syncpt_r(), num_fences));
>>> +
>>> +		for (; num_fences; num_fences--, fence++) {
>>> +			assert(tegra_fence_is_valid(fence));
>>
>> This is useless. We already add "1 + num_fences" to num_words above,
>> so it makes more sense to move this "assert" to before "1 + num_fences"
>> is added to num_words.
>>
>
> The assertion checks the validity of a single fence - not whether there
> is room in the command buffer.
>
> The goal is to prevent having invalid fences in the command stream. If
> this check were not here, it would be possible to initialise a fence
> with tegra_fence_clear() and put that fence into the stream.

My idea is: if a fence is invalid, we should not count it in "num_words"
at all. In the current code, if a fence is invalid, that fence will not
be pushed into the command stream, and "num_words" then reports a wrong
command buffer size. So I think we should:

- validate the fences and remove the invalid ones
- update num_words accordingly
- then there is no need to check each fence here, i.e. right before
  pushing a host1x syncpt wait command into the stream's active buffer
  (a rough sketch follows at the end of this mail)

>
>>> +
>>> +			tegra_stream_push(stream,
>>> +				nvhost_class_host_wait_syncpt(fence->id,
>>> +					fence->value));
>>> +		}
>>> +	}
>>> +
>>> +	if (class_id)
>>> +		tegra_stream_push(stream,
>>> +			nvhost_opcode_setclass(class_id, 0, 0));
>>> +
>>> +	return 0;
>>> +}
>>> +
>> [...]
>>> +
>>> +#endif /* TEGRA_DRMIF_H */
>>>
>
> - Arto
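
To make the suggestion concrete, here is a rough sketch of the reordering
described above. It is only an illustration, not part of the patch:
struct tegra_fence and tegra_fence_is_valid() come from the quoted code,
while the tegra_filter_fences() helper and the caller snippet are
hypothetical names invented for this example.

/*
 * Hypothetical helper (not in the patch): compact the caller's fence
 * array in place, keeping only fences that pass tegra_fence_is_valid().
 * Returns the number of fences kept.
 */
static unsigned int tegra_filter_fences(struct tegra_fence *fences,
					unsigned int num_fences)
{
	unsigned int i, kept = 0;

	for (i = 0; i < num_fences; i++) {
		if (tegra_fence_is_valid(&fences[i]))
			fences[kept++] = fences[i];
	}

	return kept;
}

/*
 * The caller would then size the command buffer from the filtered count,
 * for example:
 *
 *	num_fences = tegra_filter_fences(fences, num_fences);
 *	num_words += 1 + num_fences;	(setclass word + one wait per fence)
 *
 * With that, the per-fence assert() inside the push loop is no longer
 * needed, since only valid fences remain in the array.
 */

Note that this sketch modifies the caller's fence array; an alternative
would be to skip invalid fences while pushing and subtract them from
num_words before reserving space.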