Hi Vinod,

Thank you for your comment.

On Fri, 27 Dec 2019 12:04:11 +0530 <vkoul@xxxxxxxxxx> wrote:

> On 18-12-19, 09:57, Kunihiko Hayashi wrote:
> > This adds external DMA controller driver implemented in Socionext
> > UniPhier SoCs. This driver supports DMA_MEMCPY and DMA_SLAVE modes.
> >
> > Since this driver does not support the the way to transfer size
> > unaligned to burst width, 'src_maxburst' or 'dst_maxburst' of
>
> You mean driver does not support any unaligned bursts?

Yes. If the transfer size is not aligned to the burst size, the final
transfer overruns the requested length. (A sketch of the kind of check
I have in mind is in the P.S. at the end of this mail.)

> > +static int uniphier_xdmac_probe(struct platform_device *pdev)
> > +{
> > +	struct uniphier_xdmac_device *xdev;
> > +	struct device *dev = &pdev->dev;
> > +	struct dma_device *ddev;
> > +	int irq;
> > +	int nr_chans;
> > +	int i, ret;
> > +
> > +	if (of_property_read_u32(dev->of_node, "dma-channels", &nr_chans))
> > +		return -EINVAL;
> > +	if (nr_chans > XDMAC_MAX_CHANS)
> > +		nr_chans = XDMAC_MAX_CHANS;
> > +
> > +	xdev = devm_kzalloc(dev, struct_size(xdev, channels, nr_chans),
> > +			    GFP_KERNEL);
> > +	if (!xdev)
> > +		return -ENOMEM;
> > +
> > +	xdev->nr_chans = nr_chans;
> > +	xdev->reg_base = devm_platform_ioremap_resource(pdev, 0);
> > +	if (IS_ERR(xdev->reg_base))
> > +		return PTR_ERR(xdev->reg_base);
> > +
> > +	ddev = &xdev->ddev;
> > +	ddev->dev = dev;
> > +	dma_cap_zero(ddev->cap_mask);
> > +	dma_cap_set(DMA_MEMCPY, ddev->cap_mask);
> > +	dma_cap_set(DMA_SLAVE, ddev->cap_mask);
> > +	ddev->src_addr_widths = UNIPHIER_XDMAC_BUSWIDTHS;
> > +	ddev->dst_addr_widths = UNIPHIER_XDMAC_BUSWIDTHS;
> > +	ddev->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV) |
> > +			   BIT(DMA_MEM_TO_MEM);
> > +	ddev->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
> > +	ddev->max_burst = XDMAC_MAX_WORDS;
> > +	ddev->device_free_chan_resources = uniphier_xdmac_free_chan_resources;
> > +	ddev->device_prep_dma_memcpy = uniphier_xdmac_prep_dma_memcpy;
> > +	ddev->device_prep_slave_sg = uniphier_xdmac_prep_slave_sg;
> > +	ddev->device_config = uniphier_xdmac_slave_config;
> > +	ddev->device_terminate_all = uniphier_xdmac_terminate_all;
> > +	ddev->device_synchronize = uniphier_xdmac_synchronize;
> > +	ddev->device_tx_status = dma_cookie_status;
> > +	ddev->device_issue_pending = uniphier_xdmac_issue_pending;
> > +	INIT_LIST_HEAD(&ddev->channels);
> > +
> > +	for (i = 0; i < nr_chans; i++) {
> > +		ret = uniphier_xdmac_chan_init(xdev, i);
> > +		if (ret) {
> > +			dev_err(dev,
> > +				"Failed to initialize XDMAC channel %d\n", i);
> > +			return ret;
>
> so on error for channel N we leave N-1 channels initialized?

uniphier_xdmac_chan_init() always returns 0, so this error check can
simply be removed.
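Roughly like the following in probe (just an untested sketch), with
uniphier_xdmac_chan_init() changed to return void:

	/*
	 * Sketch only: uniphier_xdmac_chan_init() returns void, so the
	 * dev_err()/return error path in probe goes away.
	 */
	for (i = 0; i < nr_chans; i++)
		uniphier_xdmac_chan_init(xdev, i);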
> > +static int uniphier_xdmac_remove(struct platform_device *pdev)
> > +{
> > +	struct uniphier_xdmac_device *xdev = platform_get_drvdata(pdev);
> > +	struct dma_device *ddev = &xdev->ddev;
> > +	struct dma_chan *chan;
> > +	int ret;
> > +
> > +	/*
> > +	 * Before reaching here, almost all descriptors have been freed by the
> > +	 * ->device_free_chan_resources() hook. However, each channel might
> > +	 * be still holding one descriptor that was on-flight at that moment.
> > +	 * Terminate it to make sure this hardware is no longer running. Then,
> > +	 * free the channel resources once again to avoid memory leak.
> > +	 */
> > +	list_for_each_entry(chan, &ddev->channels, device_node) {
> > +		ret = dmaengine_terminate_sync(chan);
> > +		if (ret)
> > +			return ret;
> > +		uniphier_xdmac_free_chan_resources(chan);
>
> terminating sounds okayish but not freeing here. .free_chan_resources()
> should have been called already and that should ensure that termination
> is already done...

If all transfers have completed, .device_free_chan_resources() has indeed
already been called and there is nothing left to do here. However,
_remove() can be called asynchronously, i.e. before the last transfer has
completed, so a channel may still hold the descriptor that was in flight
at that point. This loop terminates that remaining transfer and frees its
descriptor so that the hardware is stopped and nothing is leaked.

Thank you,

---
Best Regards,
Kunihiko Hayashi
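P.S. Regarding the unaligned-burst point above, the check I have in mind
would look roughly like this inside uniphier_xdmac_prep_slave_sg(). This
is only an illustrative sketch, not part of this patch: "xc->sconfig"
(the saved dma_slave_config) is an assumed field name, sgl/sg_len follow
the standard device_prep_slave_sg() arguments, and for DMA_DEV_TO_MEM the
src_* fields would be used instead of dst_*:

	struct scatterlist *sg;
	unsigned int burst_bytes;
	int i;

	/*
	 * Reject scatterlist entries whose length is not a multiple of
	 * the configured burst size (bus width in bytes * maxburst);
	 * otherwise the final burst would overrun the requested length.
	 */
	burst_bytes = xc->sconfig.dst_addr_width * xc->sconfig.dst_maxburst;

	for_each_sg(sgl, sg, sg_len, i)
		if (sg_dma_len(sg) % burst_bytes)
			return NULL;	/* unaligned length, not supported */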