Hi Daniel,

Thanks for having a look at it.

>-----Original Message-----
>From: Daniel Vetter [mailto:daniel.vetter@xxxxxxxx] On Behalf Of Daniel Vetter
>Sent: Wednesday, September 23, 2015 3:14 PM
>To: R, Durgadoss
>Cc: Jani Nikula; intel-gfx@xxxxxxxxxxxxxxxxxxxxx
>Subject: Re: [RFC DP-typeC 0/2] Support USB typeC based DP on BXT
>
>On Wed, Sep 16, 2015 at 10:57:45AM +0000, R, Durgadoss wrote:
>> Hi Jani,
>>
>> >-----Original Message-----
>> >From: Jani Nikula [mailto:jani.nikula@xxxxxxxxxxxxxxx]
>> >Sent: Wednesday, September 16, 2015 3:18 PM
>> >To: R, Durgadoss; intel-gfx@xxxxxxxxxxxxxxxxxxxxx
>> >Cc: R, Durgadoss
>> >Subject: Re: [RFC DP-typeC 0/2] Support USB typeC based DP on BXT
>> >
>> >On Tue, 15 Sep 2015, Durgadoss R <durgadoss.r@xxxxxxxxx> wrote:
>> >> This is an RFC series to start the review/discussion on the approach
>> >> to supporting a USB type-C based DP panel.
>> >>
>> >> To support the USB type-C alternate DP mode, the display driver needs to
>> >> know the number of lanes required by the DP panel as well as the number
>> >> of lanes that can be supported by the type-C cable. Sometimes, the
>> >> type-C cable may limit the bandwidth even if the panel can support
>> >> more lanes.
>> >>
>> >> The goal is to find out the number of lanes which can be supported
>> >> over a particular cable so that we can cap 'max_available_lanes'
>> >> to that number during modeset.
>> >>
>> >> These two patches are based on 4.2-rc2 and tested only on
>> >> a BXT A1 platform for now.
>> >>
>> >> Brief summary of the approach taken:
>> >> -----------------------------------
>> >> 1. As soon as a DP hotplug is detected, the driver starts link training
>> >> with the highest number of lanes/bandwidth possible. If it fails,
>> >> the driver retries link training with lanes/2 at the same bandwidth.
>> >> We continue this procedure until we find a working configuration
>> >> of lane/bandwidth values. This number of lanes is then
>> >> stored as 'max_available_lanes', so that the following
>> >> intel_dp_compute_config() during modeset picks it up as
>> >> max_lane_count (instead of always 4, from the DPCD).
>> >
>> >Would all of this work automatically if our link training sequence
>> >followed the DP spec to the letter wrt degrading the link on failures?
>>
>> That is one part of it.
>>
>> Our intention is also to filter out the modes that cannot be set
>> with 'max_available_lanes' through the connector->mode_valid
>> callback, which uses these variables. Otherwise, we risk failing
>> a modeset that requests a higher resolution than the link can carry.
>>
>> Sorry, I should have also added this as part of the commit message.
>
>One approach to implementing DP link training to the spec is that if things
>fail we enable the pipe anyway (since anything else would seriously
>surprise userspace, especially for async modesets, and lead to hangs in
>userspace if vblank interrupts don't happen). We then generate a
>hotplug event to inform userspace that something changed with the monitor
>configuration, to give userspace a chance to look at the filtered mode
>list and select a new config it likes.
>
>That approach would fit rather well into the overall framework of how
>detection/mode-config changes are done currently, by keeping all the policy
>for selecting the precise mode config in userspace. The downside is that for
>USB type-C it would cause flicker, since if we only have 2 lanes we'll
>always first try the high-res mode and fail. So I think in the end we need
>both approaches.

Yes, agreed.
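To make the fallback sequence from the cover letter a bit more concrete, here
is a minimal sketch of the idea (illustrative only; upfront_link_train() and
try_link_train() are made-up names, not the functions in the patches):

/*
 * Illustrative model of the upfront link-training fallback -- not the
 * actual i915 code in these patches. Try the highest lane count first
 * and halve it on every failure until training succeeds.
 */
#include <stdbool.h>
#include <stdio.h>

#define DP_MAX_LANE_COUNT 4

/* Stand-in for one link-training attempt at the given lane count. */
static bool try_link_train(int lanes, int cable_max_lanes)
{
        return lanes <= cable_max_lanes;
}

/* Returns the value we would cache as 'max_available_lanes'. */
static int upfront_link_train(int cable_max_lanes)
{
        int lanes;

        for (lanes = DP_MAX_LANE_COUNT; lanes >= 1; lanes /= 2)
                if (try_link_train(lanes, cable_max_lanes))
                        return lanes;

        return 0; /* no working lane/bandwidth configuration */
}

int main(void)
{
        /* A type-C cable that only routes 2 lanes caps us at 2. */
        printf("max_available_lanes = %d\n", upfront_link_train(2));
        return 0;
}

And the filtering through connector->mode_valid would conceptually reduce to
the bandwidth check below (again just a sketch using standard DP 8b/10b link
math; mode_fits_link() is a hypothetical helper, not the code in the patches):

#include <stdbool.h>

/* A mode is acceptable only if the cable-limited lane count can carry it. */
static bool mode_fits_link(int pixel_clock_khz, int bpp,
                           int link_rate_kbps, int max_available_lanes)
{
        /* Bit rate the mode needs on the main link. */
        int required_kbps = pixel_clock_khz * bpp;
        /* Payload rate the link offers after 8b/10b coding overhead. */
        int available_kbps = link_rate_kbps / 10 * 8 * max_available_lanes;

        return required_kbps <= available_kbps;
}

For example, on a cable limited to 2 HBR2 (5.4 Gbps) lanes this keeps
1080p@60 (~3.6 Gbps needed vs ~8.6 Gbps available) but rejects 4K@60
(~12.8 Gbps needed), which is the pruning we want before userspace picks a mode.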
>Wrt the rfc it would be great if we can make it at least
>somewhat platform-agnostic - anything on big core since hsw+ supports

By platform-agnostic, do you mean trying to implement _upfront_link_train()
for a few more platforms (HSW onwards) to see how much code we can share?
If it is something else, please elaborate a bit more.

>enabling the DP port without enabling a pipe (because dp mst needs that),
>so could support your approach here too.

We have this kind of implementation tested on CHV and BXT.
Can I consider at least the BXT part as a sample for the HSW+ platforms?

Thanks,
Durga

>-Daniel
>--
>Daniel Vetter
>Software Engineer, Intel Corporation
>http://blog.ffwll.ch