On Fri, Feb 21, 2014 at 10:41:14AM -0500, Alex Deucher wrote:
> On Fri, Feb 21, 2014 at 9:46 AM, Ville Syrjälä
> <ville.syrjala@xxxxxxxxxxxxxxx> wrote:
> > On Fri, Feb 21, 2014 at 02:20:24PM +0000, Sharma, Shashank wrote:
> >> Hi Ville,
> >>
> >> Thanks for your time and comments.
> >> I understand the two basic problems that you see in this implementation:
> >>
> >> 1. The most important issue from my POV is that it can't be part of the atomic modeset.
> >> 2. It makes the whole API inconsistent.
> >>
> >> I am not sure it is good to block the current implementation just because we have something planned for this in the atomic modeset.
> >> I think even in the atomic modeset we will need a core implementation like this; only the interface would be different, and might come in the form of a DRM property.
> >> At that point we could keep this core implementation as it is and change only the interfaces/framework.
> >>
> >> This way we can go ahead with the current implementation now, and later adapt the interfaces to fit the final one, such as a DRM property in the atomic modeset.
> >> Or you can suggest the expected interface to us, and we can work on modifying the code to match it.
> >
> > The expected interface will be range properties for stuff like
> > brightness, contrast etc. controls. There are already such things as
> > connector properties, but we're going to want something similar as
> > plane or crtc properties. One thing that worries me about such
> > properties though is whether we can make them hardware agnostic and
> > yet allow userspace precise control over the final image. That is, if we
> > map some fixed input range to a hardware specific output range, userspace
> > can't know how the actual output will change when the input changes. On
> > the other hand, if the input is hardware specific, userspace can't know
> > what value to put in there to get the expected change on the output side.
> >
> > For bigger stuff like CSC matrices and gamma ramps we will want to use
> > some reasonably well defined blobs. I.e. the internal structure of the
> > blob has to be documented and it shouldn't contain more than necessary:
> > just the CSC matrix coefficients for one matrix, or just the entries
> > for a single gamma ramp. Again, ideally we should make the blobs hardware
> > agnostic, but still allow precise control over the output data.
> >
> > I think this is going to involve first going over our hardware features,
> > trying to find the common patterns between different generations. If
> > there's a way to make something that works across the board for us, or
> > at least across a wide range, then we should also ask for some input on
> > dri-devel whether the proposed property would work for other people. We
> > may need to define new property types to more precisely define what the
> > value of the property actually means.
>
> Our hardware has similar features, so I'm sure there will be quite a
> bit of common ground. I also vote for properties.

Thirded. Tegra should be able to use a hardware-agnostic description of
these as well. I wonder if perhaps VESA or some other standard already
defines such a format for some of these properties.

Thierry
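
To make the range/blob property idea a bit more concrete, below is a minimal
sketch of how such CRTC properties could be created with the existing
drm_property helpers. The property names, the brightness range, and the CSC
blob layout are illustrative assumptions only, not an agreed-upon interface:

#include <drm/drmP.h>
#include <drm/drm_crtc.h>

/*
 * Hypothetical blob layout: a 3x3 CSC matrix in S2.30 fixed point.
 * The exact precision and coefficient ordering would have to be
 * documented as part of the property definition.
 */
struct drm_csc_matrix {
	__s32 coeff[9];
};

static int example_attach_color_props(struct drm_device *dev,
				      struct drm_crtc *crtc)
{
	struct drm_property *brightness, *csc;

	/* Range property: fixed 0-255 input, mapped by the driver. */
	brightness = drm_property_create_range(dev, 0, "brightness", 0, 255);
	if (!brightness)
		return -ENOMEM;
	drm_object_attach_property(&crtc->base, brightness, 128);

	/* Blob property that would carry a struct drm_csc_matrix. */
	csc = drm_property_create(dev, DRM_MODE_PROP_BLOB, "csc matrix", 0);
	if (!csc)
		return -ENOMEM;
	drm_object_attach_property(&crtc->base, csc, 0);

	return 0;
}

Whether a fixed input range like 0-255 gives userspace precise enough control
over the actual output is exactly the open question raised above, so the
numbers here should not be read as a proposal.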