Re: [PATCH v3 1/2] drm/bridge: Add Cadence DSI driver

On Wed, 20 Sep 2017 15:42:50 +0300
Tomi Valkeinen <tomi.valkeinen@xxxxxx> wrote:

> 
> On 20/09/17 15:32, Boris Brezillon wrote:
> > On Wed, 20 Sep 2017 14:55:02 +0300
> > Tomi Valkeinen <tomi.valkeinen@xxxxxx> wrote:
> >   
> >> Hi Boris,
> >>
> >>
> >> On 31/08/17 18:55, Boris Brezillon wrote:  
> >>> Add a driver for Cadence DPI -> DSI bridge.
> >>>
> >>> This driver only supports a subset of the Cadence DSI bridge capabilities.
> >>>
> >>> Here is a non-exhaustive list of missing features:
> >>>  * burst mode
> >>>  * dynamic configuration of the DPHY based on the display resolution and refresh rate
> >>>  * support for additional input interfaces (SDI input)
> >>>
> >>> Signed-off-by: Boris Brezillon <boris.brezillon@xxxxxxxxxxxxxxxxxx>
> >>> ---
> >>> Changes in v3:
> >>> - replace magic values with real timing calculations. The DPHY PLL clock
> >>>   is still hardcoded since we don't have a working DPHY block yet, and
> >>>   the DPHY is the piece of HW needed to dynamically configure the PLL
> >>>   rate based on the display refresh rate and resolution.
> >>> - parse DSI devices represented with the OF-graph. This is needed to
> >>>   support DSI devices controlled through an external bus like I2C or
> >>>   SPI.
> >>> - use the DRM panel-bridge infrastructure to simplify the DRM panel
> >>>   logic
> >>>
> >>> Changes in v2:
> >>> - rebase on v4.12-rc1 and adapt the driver to the drm_bridge API changes
> >>> - return the correct error when devm_clk_get(sysclk) fails
> >>> - add missing depends on OF and select DRM_PANEL in the Kconfig entry
> >>> ---
> >>>  drivers/gpu/drm/bridge/Kconfig    |    9 +
> >>>  drivers/gpu/drm/bridge/Makefile   |    1 +
> >>>  drivers/gpu/drm/bridge/cdns-dsi.c | 1090 +++++++++++++++++++++++++++++++++++++
> >>>  3 files changed, 1100 insertions(+)
> >>>  create mode 100644 drivers/gpu/drm/bridge/cdns-dsi.c    
> >>
> >> We need some power management. At the moment the clocks are kept always
> >> enabled. Those need to be turned off when the IP is not used.  
> > 
> > I can try to move the clk_prepare_enable/disable_unprepare() calls in
> > the bridge->enable/disable() hooks, but I'm not sure the DSI regs
> > content is kept when I disable dsi_p_clk.  
> 
> Yes, context restore has to be handled.
> 
> I'm not sure how different it would be, but you could use runtime PM and
> its resume and suspend callbacks. Then you'd get delayed power-down for
> free, which would avoid a pointless power-down when the bridge is
> disabled, reconfigured and enabled again right away, for example.

As you might already know, I'm testing on an emulated system, and I'm
not sure everything behaves as it will in the final design (once
integrated in a real SoC). I can add support for more advanced PM
mechanisms, but I probably won't be able to test them, so I'd recommend
doing the PM-related changes in a follow-up patch (AFAICT, none of the
design choices made in this driver prevent PM optimizations, so it
should be pretty easy to add this afterwards).
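
To make the discussion a bit more concrete, here is a rough, untested
sketch of what the runtime PM hooks could look like. This is only an
illustration under assumptions: the clock field name (dsi->dsi_p_clk)
and the cdns_dsi_ctx_save()/cdns_dsi_ctx_restore() helpers are made up
for the example, not existing driver code.

#include <linux/clk.h>
#include <linux/pm_runtime.h>

static int __maybe_unused cdns_dsi_runtime_suspend(struct device *dev)
{
	struct cdns_dsi *dsi = dev_get_drvdata(dev);

	/* Hypothetical helper: save the registers lost when dsi_p_clk is gated. */
	cdns_dsi_ctx_save(dsi);
	clk_disable_unprepare(dsi->dsi_p_clk);

	return 0;
}

static int __maybe_unused cdns_dsi_runtime_resume(struct device *dev)
{
	struct cdns_dsi *dsi = dev_get_drvdata(dev);
	int ret;

	ret = clk_prepare_enable(dsi->dsi_p_clk);
	if (ret)
		return ret;

	/* Hypothetical helper: re-program the saved register context. */
	cdns_dsi_ctx_restore(dsi);

	return 0;
}

static const struct dev_pm_ops cdns_dsi_pm_ops = {
	SET_RUNTIME_PM_OPS(cdns_dsi_runtime_suspend,
			   cdns_dsi_runtime_resume, NULL)
};

The bridge ->enable()/->disable() hooks would then only need
pm_runtime_get_sync()/pm_runtime_put_autosuspend() calls, and
pm_runtime_set_autosuspend_delay() would give the delayed power-down
you mention.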

> 
> >>> +static irqreturn_t cdns_dsi_interrupt(int irq, void *data)
> >>> +{
> >>> +	struct cdns_dsi *dsi = data;
> >>> +	irqreturn_t ret = IRQ_NONE;
> >>> +	u32 flag, ctl;
> >>> +
> >>> +	flag = readl(dsi->regs + DIRECT_CMD_STS_FLAG);
> >>> +	if (flag) {
> >>> +		ctl = readl(dsi->regs + DIRECT_CMD_STS_CTL);
> >>> +		ctl &= ~flag;
> >>> +		writel(ctl, dsi->regs + DIRECT_CMD_STS_CTL);    
> >>
> >> I presume it's the enable/disable bit in STS_CTL that prevents the
> >> interrupt from triggering again, instead of the status flag?  
> > 
> > Yep.
> >   
> >> Just making
> >> sure, because I think on some IPs the status flag has been the one that
> >> triggers the interrupt.
> >>  
> >>> +		complete(&dsi->direct_cmd_comp);
> >>> +		ret = IRQ_HANDLED;
> >>> +	}
> >>> +
> >>> +	return ret;
> >>> +}
> >>> +
> >>> +static ssize_t cdns_dsi_transfer(struct mipi_dsi_host *host,
> >>> +				 const struct mipi_dsi_msg *msg)
> >>> +{
> >>> +	struct cdns_dsi *dsi = to_cdns_dsi(host);
> >>> +	u32 cmd, sts, val, wait = WRITE_COMPLETED, ctl = 0;
> >>> +	struct mipi_dsi_packet packet;
> >>> +	int ret, i, tx_len, rx_len;
> >>> +
> >>> +	ret = mipi_dsi_create_packet(&packet, msg);
> >>> +	if (ret)
> >>> +		return ret;
> >>> +
> >>> +	tx_len = msg->tx_buf ? msg->tx_len : 0;
> >>> +	rx_len = msg->rx_buf ? msg->rx_len : 0;
> >>> +
> >>> +	/* For read operations, the maximum TX len is 2. */    
> >>
> >> Hmm, why is that?  
> > 
> > I don't know, that's what is stated in the spec.
> > Excerpt from the CMD_SIZE field description:
> > 
> > "
> > For read operations, any value
> > written which is larger than 2
> > bytes will be ignored and the
> > command payload will be truncated
> > to 2 bytes.
> > "  
> 
> Hmm ok... In another part ("Direct command usage") it says that for short
> packets the max is 2, but for long packets the max is the FIFO size.

I guess what they mean here is that the read length (rx_len) is bounded
by the FIFO size, but when you do a read you first have to send a few
bytes to tell the device which reg/info is being read, and that is where
the tx_len limitation in the read path comes from. Of course, when you
send a long-packet write command, tx_len is limited by the FIFO depth.

Do you know of any read commands that require more than 2 TX bytes?
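
For reference, the standard DCS reads I'm aware of carry a single TX
byte (the command), and generic read requests carry at most two
parameters, so the 2-byte limit should be fine in practice. A minimal
illustration from the device driver side, assuming a mipi_dsi_device
handle (example_read_power_mode is just a made-up name for the example):

#include <drm/drm_mipi_dsi.h>
#include <video/mipi_display.h>

static int example_read_power_mode(struct mipi_dsi_device *dsi)
{
	u8 mode;
	ssize_t ret;

	/* tx_len is 1 here (the DCS command byte), rx_len is 1. */
	ret = mipi_dsi_dcs_read(dsi, MIPI_DCS_GET_POWER_MODE, &mode,
				sizeof(mode));
	if (ret < 0)
		return ret;

	return mode;
}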

> 
> >>> +	if (rx_len && tx_len > 2)
> >>> +		return -ENOTSUPP;
> >>> +
> >>> +	/* TX len is limited by the CMD FIFO depth. */
> >>> +	if (tx_len > dsi->direct_cmd_fifo_depth)
> >>> +		return -ENOTSUPP;
> >>> +
> >>> +	/* RX len is limited by the RX FIFO depth. */
> >>> +	if (rx_len > dsi->rx_fifo_depth)
> >>> +		return -ENOTSUPP;
> >>> +
> >>> +	cmd = CMD_SIZE(tx_len) | CMD_VCHAN_ID(msg->channel) |
> >>> +	      CMD_DATATYPE(msg->type);
> >>> +
> >>> +	if (msg->flags & MIPI_DSI_MSG_USE_LPM)
> >>> +		cmd |= CMD_LP_EN;
> >>> +
> >>> +	if (mipi_dsi_packet_format_is_long(msg->type))
> >>> +		cmd |= CMD_LONG;
> >>> +
> >>> +	if (rx_len) {
> >>> +		cmd |= READ_CMD;
> >>> +		wait = READ_COMPLETED_WITH_ERR | READ_COMPLETED;
> >>> +		ctl = READ_EN | BTA_EN;
> >>> +	} else if (msg->flags & MIPI_DSI_MSG_REQ_ACK) {
> >>> +		cmd |= BTA_REQ;
> >>> +		wait = ACK_WITH_ERR_RCVD | ACK_RCVD;
> >>> +		ctl = BTA_EN;
> >>> +	}    
> >>
> >> It's been a while since I worked with DSI, but... Shouldn't there be a
> >> check somewhere that the packet(s) can fit into the blanking intervals?  
> > 
> > Hm, I'm not sure. DSI commands are usually sent when the encoder/bridge
> > is not transmitting video, so in this case we don't have any constraints.
> 
> This is true for setup, but, for example, if backlight control or other
> dynamic display features are supported by the panel, those have to be
> done when the video stream is enabled.

Okay, let's wait for Cadence reply then.