Hi guys,

Is there any canonical, or even rough, old documentation (I noticed a connector/encoder structure change lately) on how the DRM crtc/encoder/connector framework is constructed, and what the responsibilities of each component are in a display driver? The Intel driver is a little overcomplicated to derive from for the work I have in mind, which is an embedded graphics driver: there is no setting up of i2c buses on weird chips (it's a real i2c bus), and it would need platform_device support at init, which is an easy addition. But without a good non-code description of what goes on where (the in-code comments are not that great), I am not sure I understand well enough how it is all meant to operate.

What we have is an i.MX515 with a display controller (IPU) which can do video overlay and all that fancy stuff. This is the CRTC, right? It would be responsible for setting up clocks and managing which GEM object is the current framebuffer? Implementing a GEM memory manager to allocate framebuffers and other objects goes under this too, and then the CRTC talks to its encoder.. but where does the connector come in? All I can see is that it is a sort of abstraction for parsing EDID data, so you can find out whether you are driving HDMI ("TV mode" on a TV) or a DVI-like display ("PC mode" on a TV). Does that about sum it up?

Is there a simple skeleton (maybe a virtual or off-screen framebuffer?) implementing the framework, so I can relieve myself of having to pick apart the Intel and Radeon drivers and work out for myself why and how they differ in implementation?

BTW, the other question is: for future-proofing, do I use GEM or TTM or what?

--
Matt Sealey <matt at genesi-usa.com>
Product Development Analyst, Genesi USA, Inc.
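
P.S. For what it's worth, below is the very rough skeleton I have in my head after skimming drm_crtc.h and drm_crtc_helper.h. All of the imx_* names are mine, and the *_funcs tables, allocation and error handling are elided, so please treat it as a sketch of my mental model rather than anything that compiles. Corrections are very welcome if I have the ownership or wiring of these objects wrong.

/* One CRTC per IPU display interface: owns mode and pixel clock setup
 * and the scanout buffer (a GEM object wrapped in a drm_framebuffer). */
struct imx_crtc {
	struct drm_crtc base;
	/* IPU register base, pixel clock, current scanout fb, ... */
};

/* Encoder: converts the CRTC's pixel stream into a specific signal,
 * e.g. the display interface feeding a DVI/HDMI transmitter. */
struct imx_encoder {
	struct drm_encoder base;
};

/* Connector: the physical output. Does detect() and EDID parsing over
 * the (real) i2c bus and reports the mode list to userspace. */
struct imx_connector {
	struct drm_connector base;
	struct i2c_adapter *ddc;
};

static int imx_modeset_init(struct drm_device *dev)
{
	struct imx_crtc *crtc;
	struct imx_encoder *enc;
	struct imx_connector *con;

	/* kzalloc of the three structs and error handling elided */

	drm_mode_config_init(dev);

	/* CRTC: the scanout engine */
	drm_crtc_init(dev, &crtc->base, &imx_crtc_funcs);
	drm_crtc_helper_add(&crtc->base, &imx_crtc_helper_funcs);

	/* Encoder: sits between the CRTC and the connector */
	drm_encoder_init(dev, &enc->base, &imx_encoder_funcs,
			 DRM_MODE_ENCODER_TMDS);
	enc->base.possible_crtcs = 0x1;	/* only CRTC 0 can feed it */
	drm_encoder_helper_add(&enc->base, &imx_encoder_helper_funcs);

	/* Connector: hotplug detection and EDID/mode enumeration */
	drm_connector_init(dev, &con->base, &imx_connector_funcs,
			   DRM_MODE_CONNECTOR_DVID);
	drm_connector_helper_add(&con->base, &imx_connector_helper_funcs);

	/* tell the core which encoder this connector can be driven by */
	drm_mode_connector_attach_encoder(&con->base, &enc->base);

	return 0;
}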