On Mon, Nov 26, 2012 at 10:21:36AM +0200, Terje Bergström wrote:
> On 24.11.2012 21:04, Thierry Reding wrote:
> >>> I would really like this to be phased in in more manageable chunks. For
> >>> instance syncpoints could be added on top of the existing host1x
> >>> infrastructure (the display controllers can make use of it for VBLANK
> >>> reporting, right?), followed by channel support for command submission.
>
> I staged nvhost into 4 patches that each add some functionality:
> * Add driver with API to read and increment sync point values
> * Add syncpt wait and interrupt infrastructure
> * Add channel and client support and related infrastructure (jobs, memmgr)
> * Add debug support
>
> I hope that helps in reviews.

Sounds better.

> > I'm all for reusing your code. My concern is that the structure of this
> > is quite different from what can be found in the L4T downstream kernels
> > and therefore some rethinking may be required.
>
> As long as I can keep nvhost as a separate entity, I can port changes
> from upstream to downstream and vice versa and there's no architectural
> reason why we couldn't gradually move in downstream kernel to using
> tegradrm.
>
> If we merge host1x management code to tegradrm, there's no logical way
> of taking tegradrm into use in downstream.

I think I understand what you're saying. While I agree that it might be
better to move host1x out of tegra-drm in the long term, in particular
because other frameworks like V4L2 will start to use it eventually, I
have some reservations about whether it makes sense to do it right away.

Furthermore as I understand it, most of the work required to use
tegra-drm downstream would involve changes to the userspace components,
wouldn't it? None of that would be dependent on where in the kernel the
host1x driver resides.

Thierry