> Subject: Re: [PATCH 03/10] remoteproc: imx: use devm_ioremap
>
> On Mon, Jul 27, 2020 at 06:28:20AM +0000, Peng Fan wrote:
> > Hi Oleksij,
> >
> > > Subject: Re: [PATCH 03/10] remoteproc: imx: use devm_ioremap
> > >
> > > On Fri, Jul 24, 2020 at 04:08:06PM +0800, Peng Fan wrote:
> > > > We might need to map a region multiple times, because the region
> > > > might be shared between remote processors, such as i.MX8QM with
> > > > dual M4 cores. So use devm_ioremap, not devm_ioremap_resource.
> > >
> > > Can you please give an example of this kind of shared resource and
> > > how it should be handled by two separate devices?
> >
> > This is to share the vdevbuffer space; there is a vdevbuffer node in
> > the device tree, and it will be shared between M4_0 and M4_1.
> >
> > For the buffer itself, the Linux DMA API will handle the space.
>
> Why does remoteproc need to care about it? If I see it correctly, from
> the Linux perspective it is one buffer and one driver is responsible
> for it. Or am I missing something?

We do not have the vdev buffer in the resource table, so I added it in
the device tree, see below:

	imx8qm_cm40: imx8qm_cm4@0 {
		compatible = "fsl,imx8qm-cm4";
		rsc-da = <0x90000000>;
		mbox-names = "tx", "rx", "rxdb";
		mboxes = <&lsio_mu5 0 1
			  &lsio_mu5 1 1
			  &lsio_mu5 3 1>;
		mub-partition = <3>;
		memory-region = <&vdev0vring0>, <&vdev0vring1>, <&vdevbuffer>,
				<&vdev1vring0>, <&vdev1vring1>;
		core-index = <0>;
		core-id = <IMX_SC_R_M4_0_PID0>;
		status = "okay";
		power-domains = <&pd IMX_SC_R_M4_0_PID0>,
				<&pd IMX_SC_R_M4_0_MU_1A>;
	};

	imx8qm_cm41: imx8x_cm4@1 {
		compatible = "fsl,imx8qm-cm4";
		rsc-da = <0x90100000>;
		mbox-names = "tx", "rx", "rxdb";
		mboxes = <&lsio_mu6 0 1
			  &lsio_mu6 1 1
			  &lsio_mu6 3 1>;
		mub-partition = <4>;
		memory-region = <&vdev2vring0>, <&vdev2vring1>, <&vdevbuffer>,
				<&vdev3vring0>, <&vdev3vring1>;
		core-index = <1>;
		core-id = <IMX_SC_R_M4_1_PID0>;
		status = "okay";
		power-domains = <&pd IMX_SC_R_M4_1_PID0>,
				<&pd IMX_SC_R_M4_1_MU_1A>;
	};

	vdevbuffer: vdevbuffer {
		compatible = "shared-dma-pool";
		reg = <0 0x90400000 0 0x100000>;
		no-map;
	};

The vdevbuffer node above is shared between the M4_0 and M4_1 nodes; it
will be used as the virtio data buffer.

And I have the following in rproc_add_virtio_dev to share the vdevbuffer:

	/* Try to find dedicated vdev buffer carveout */
	mem = rproc_find_carveout_by_name(rproc, "vdev%dbuffer", rvdev->index);
	if (!mem)
		mem = rproc_find_carveout_by_name(rproc, "vdevbuffer");

Hope this is clear.

Thanks,
Peng.

> > Thanks,
> > Peng.
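[Editor's note] For context, a minimal sketch of how the shared carveout
found by the lookup above could then be tied to the vdev as its DMA pool.
The surrounding variables (rproc, rvdev, dev) and the
dma_declare_coherent_memory() association are assumptions for
illustration only, not part of the posted change:

	struct rproc_mem_entry *mem;
	int ret;

	/* Prefer a per-vdev carveout, fall back to the shared "vdevbuffer" */
	mem = rproc_find_carveout_by_name(rproc, "vdev%dbuffer", rvdev->index);
	if (!mem)
		mem = rproc_find_carveout_by_name(rproc, "vdevbuffer");

	if (mem) {
		/*
		 * Assumed for illustration: expose the carveout as the vdev's
		 * coherent DMA pool so virtio buffers are allocated from it.
		 */
		ret = dma_declare_coherent_memory(dev, mem->dma, mem->da,
						  mem->len);
		if (ret)
			dev_warn(dev, "failed to declare vdev DMA memory\n");
	}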
> > > >
> > > > Reviewed-by: Richard Zhu <hongxing.zhu@xxxxxxx>
> > > > Signed-off-by: Peng Fan <peng.fan@xxxxxxx>
> > > > ---
> > > >  drivers/remoteproc/imx_rproc.c | 5 +++--
> > > >  1 file changed, 3 insertions(+), 2 deletions(-)
> > > >
> > > > diff --git a/drivers/remoteproc/imx_rproc.c b/drivers/remoteproc/imx_rproc.c
> > > > index 3b3904ebac75..82594a800a1b 100644
> > > > --- a/drivers/remoteproc/imx_rproc.c
> > > > +++ b/drivers/remoteproc/imx_rproc.c
> > > > @@ -296,9 +296,10 @@ static int imx_rproc_addr_init(struct imx_rproc *priv,
> > > >  		if (b >= IMX7D_RPROC_MEM_MAX)
> > > >  			break;
> > > >
> > > > -		priv->mem[b].cpu_addr = devm_ioremap_resource(&pdev->dev, &res);
> > > > +		/* Not use resource version, because we might share region */
> > > > +		priv->mem[b].cpu_addr = devm_ioremap(&pdev->dev, res.start, resource_size(&res));
> > > >  		if (IS_ERR(priv->mem[b].cpu_addr)) {
> > > > -			dev_err(dev, "devm_ioremap_resource failed\n");
> > > > +			dev_err(dev, "devm_ioremap %pR failed\n", &res);
> > > >  			err = PTR_ERR(priv->mem[b].cpu_addr);
> > > >  			return err;
> > > >  		}
> > > > --
> > > > 2.16.4
> > > >
> > >
> > > --
> > > Pengutronix e.K.                           |                             |
> > > Steuerwalder Str. 21                       | http://www.pengutronix.de/  |
> > > 31137 Hildesheim, Germany                  | Phone: +49-5121-206917-0    |
> > > Amtsgericht Hildesheim, HRA 2686           | Fax:   +49-5121-206917-5555 |
>
> --
> Pengutronix e.K.                           |                             |
> Steuerwalder Str. 21                       | http://www.pengutronix.de/  |
> 31137 Hildesheim, Germany                  | Phone: +49-5121-206917-0    |
> Amtsgericht Hildesheim, HRA 2686           | Fax:   +49-5121-206917-5555 |
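[Editor's note] As background for the hunk above: devm_ioremap_resource()
claims the region exclusively with devm_request_mem_region() before
mapping it, so the second M4 instance probing against the shared
vdevbuffer range would fail with -EBUSY, while plain devm_ioremap() only
creates the mapping. A simplified sketch of that behaviour follows; it is
not the actual lib/devres.c implementation, and sketch_ioremap_resource
is a made-up name:

	#include <linux/device.h>
	#include <linux/io.h>
	#include <linux/ioport.h>

	/* Roughly what devm_ioremap_resource() does for an IORESOURCE_MEM region */
	static void __iomem *sketch_ioremap_resource(struct device *dev,
						     struct resource *res)
	{
		/* Exclusive claim: fails if another device already owns the range */
		if (!devm_request_mem_region(dev, res->start, resource_size(res),
					     dev_name(dev)))
			return IOMEM_ERR_PTR(-EBUSY);

		/* The mapping itself does not care who else has mapped the range */
		return devm_ioremap(dev, res->start, resource_size(res));
	}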