RE: [RESEND,2/5] dmaengine: Add ADM driver

> Add the DMA engine driver for the QCOM Application Data Mover (ADM) DMA
> controller found in the MSM8x60 and IPQ/APQ8064 platforms.

With minor changes I got this working on MDM9615 with qcom_nand. The changes I had to make are noted inline below; please consider them. Patches for MDM9615 NAND support are pending.

> +static struct dma_async_tx_descriptor *adm_prep_slave_sg(struct dma_chan *chan,
> +	struct scatterlist *sgl, unsigned int sg_len,
> +	enum dma_transfer_direction direction, unsigned long flags,
> +	void *context)
> +{
...

> +	/* if using flow control, validate burst and crci values */
> +	if (achan->slave.device_fc) {
> +
> +		blk_size = adm_get_blksize(burst);
> +		if (blk_size < 0) {
> +			dev_err(adev->dev, "invalid burst value: %d\n",
> +				burst);
> +			return ERR_PTR(-EINVAL);
Return NULL here; most DMA clients (including qcom_nand) expect NULL, not an ERR_PTR, when prep_slave_sg() fails.
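Something like this (an untested sketch against this patch):

	blk_size = adm_get_blksize(burst);
	if (blk_size < 0) {
		dev_err(adev->dev, "invalid burst value: %d\n", burst);
		return NULL;
	}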
> +		}
> +
> +		crci = achan->slave.slave_id & 0xf;
> +		if (!crci || achan->slave.slave_id > 0x1f) {
> +			dev_err(adev->dev, "invalid crci value\n");
> +			return ERR_PTR(-EINVAL);
Ditto above.
> +		}
> +	}
> +
> +	/* iterate through sgs and compute allocation size of structures */
> +	for_each_sg(sgl, sg, sg_len, i) {
> +		if (achan->slave.device_fc) {
> +			box_count += DIV_ROUND_UP(sg_dma_len(sg) / burst,
> +						  ADM_MAX_ROWS);
> +			if (sg_dma_len(sg) % burst)
> +				single_count++;
> +		} else {
> +			single_count += DIV_ROUND_UP(sg_dma_len(sg),
> +						     ADM_MAX_XFER);
> +		}
> +	}
> +
> +	async_desc = kzalloc(sizeof(*async_desc), GFP_NOWAIT);
> +	if (!async_desc)
> +		return ERR_PTR(-ENOMEM);
Ditto above.

> +
> +	if (crci)
> +		async_desc->mux = achan->slave.slave_id & ADM_CRCI_MUX_SEL ?
> +					ADM_CRCI_CTL_MUX_SEL : 0;
> +	async_desc->crci = crci;
> +	async_desc->blk_size = blk_size;
> +	async_desc->dma_len = single_count * sizeof(struct adm_desc_hw_single) +
> +				box_count * sizeof(struct adm_desc_hw_box) +
> +				sizeof(*cple) + 2 * ADM_DESC_ALIGN;
> +
> +	async_desc->cpl = dma_alloc_writecombine(adev->dev, async_desc->dma_len,
> +				&async_desc->dma_addr, GFP_NOWAIT);
Under memory pressure this allocation can fail, which shows up as NAND errors. I handled it with wait_event_timeout(), waiting until buffers become available again. Either that, or clients such as qcom_nand need to handle this failure.
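Roughly what I did, as a sketch: cpl_wait is a waitqueue I added to struct adm_device (initialized with init_waitqueue_head() in probe), and this assumes prep_slave_sg() is reached from a context that may sleep. After the existing dma_alloc_writecombine() call:

	if (!async_desc->cpl) {
		/*
		 * The condition retries the allocation each time
		 * adm_dma_free_desc() wakes us; give up after 100ms
		 * and fall through to the failure path below.
		 */
		wait_event_timeout(adev->cpl_wait,
				   (async_desc->cpl = dma_alloc_writecombine(
						adev->dev, async_desc->dma_len,
						&async_desc->dma_addr,
						GFP_NOWAIT)) != NULL,
				   msecs_to_jiffies(100));
	}

With the matching wake_up() in adm_dma_free_desc() (see below), a prep call under pressure blocks briefly instead of failing outright.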

> +
> +	if (!async_desc->cpl) {
> +		kfree(async_desc);
> +		return ERR_PTR(-ENOMEM);
Return NULL.
> +	}
...
> +}
...

> +static void adm_dma_free_desc(struct virt_dma_desc *vd) {
> +	struct adm_async_desc *async_desc = container_of(vd,
> +			struct adm_async_desc, vd);
> +
> +	dma_free_writecombine(async_desc->adev->dev, async_desc->dma_len,
> +		async_desc->cpl, async_desc->dma_addr);
> +	kfree(async_desc);
Do wake_up() here to signal buffer availability.
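I.e. something along these lines (again a sketch; note adev must be read out before the kfree(), and cpl_wait is the waitqueue from above):

	struct adm_device *adev = async_desc->adev;

	dma_free_writecombine(adev->dev, async_desc->dma_len,
			      async_desc->cpl, async_desc->dma_addr);
	kfree(async_desc);

	/* let a waiter in adm_prep_slave_sg() retry its allocation */
	wake_up(&adev->cpl_wait);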
> +}

Regards, Zoran