Re: [PATCH] dmaengine: cppi41: Fix oops in cppi41_runtime_resume

On 01/13/2017 10:17 AM, Tony Lindgren wrote:
> * Tony Lindgren <tony@xxxxxxxxxxx> [170112 16:04]:
>> * Grygorii Strashko <grygorii.strashko@xxxxxx> [170112 15:43]:
>>> @@ -457,38 +449,36 @@ static void push_desc_queue(struct cppi41_channel *c)
>>>         cppi_writel(reg, cdd->qmgr_mem + QMGR_QUEUE_D(c->q_num));
>>>  }
>>>  
>>> -static void pending_desc(struct cppi41_channel *c)
>>> +static void cppi41_dma_issue_pending(struct dma_chan *chan)
>>>  {
>>> +       struct cppi41_channel *c = to_cpp41_chan(chan);
>>>         struct cppi41_dd *cdd = c->cdd;
>>> +       int error;
>>> +       struct cppi41_channel *_c;
>>>         unsigned long flags;
>>>  
>>>         spin_lock_irqsave(&cdd->lock, flags);
>>>         list_add_tail(&c->node, &cdd->pending);
>>> -       spin_unlock_irqrestore(&cdd->lock, flags);
>>> -}
>>> -
>>> -static void cppi41_dma_issue_pending(struct dma_chan *chan)
>>> -{
>>> -       struct cppi41_channel *c = to_cpp41_chan(chan);
>>> -       struct cppi41_dd *cdd = c->cdd;
>>> -       int error;
>>>  
>>>         error = pm_runtime_get(cdd->ddev.dev);
>>> -       if ((error != -EINPROGRESS) && error < 0) {
>>> +       if (error < 0) {
>>>                 pm_runtime_put_noidle(cdd->ddev.dev);
>>>                 dev_err(cdd->ddev.dev, "Failed to pm_runtime_get: %i\n",
>>>                         error);
>>> -
>>> +               spin_unlock_irqrestore(&cdd->lock, flags);
>>>                 return;
>>>         }
>>>  
>>> -       if (likely(pm_runtime_active(cdd->ddev.dev)))
>>> -               push_desc_queue(c);
>>> -       else
>>> -               pending_desc(c);
>>> +       if (!cdd->is_suspended) {
>>> +               list_for_each_entry_safe(c, _c, &cdd->pending, node) {
>>> +                       push_desc_queue(c);
>>> +                       list_del(&c->node);
>>> +               }
>>> +       }
>>>  
>>>         pm_runtime_mark_last_busy(cdd->ddev.dev);
>>>         pm_runtime_put_autosuspend(cdd->ddev.dev);
>>> +       spin_unlock_irqrestore(&cdd->lock, flags);
>>>  }
>>
>> So always add to the queue no matter what, then always flush the queue
>> directly if active? Yeah that might work :)
>>
>> Don't we need the spinlock held around list_for_each_entry_safe() to
>> prevent it racing with runtime_resume() though?
> 
> I could not apply it here as it looks like your mail server replaced
> tabs with spaces :(

Sorry.

> 
> But anyway, here's your basic idea plugged into my recent patch and
> it seems to work. I may have missed something while reading though,
> so please check.
> 
> The pm_runtime_get/mark_last_busy/put_autosuspend and WARN_ON() we
> want to keep in cppi41_irq() at least for now :)

As per my understanding and testing, it looks like it might be enough to
have just pm_runtime_mark_last_busy(cdd->ddev.dev); in cppi41_irq().
But it can be left as is, and yes - the overall idea is that the irq
should not be triggered while the device is idle.
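
I.e. something like the below (untested sketch only - the completion
handling itself is elided, and the patch above keeps the get/put pair
as you say):

/* Sketch: if the get()/put_autosuspend() pair were ever dropped, the
 * irq handler would only need to refresh the autosuspend timer, since
 * the irq cannot legitimately fire while the device is idle.
 */
static irqreturn_t cppi41_irq(int irq, void *data)
{
	struct cppi41_dd *cdd = data;

	/* ... per-queue descriptor completion handling elided ... */

	pm_runtime_mark_last_busy(cdd->ddev.dev);

	return IRQ_HANDLED;
}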

> 
> And I'm thinking we must also call cppi41_run_queue() with the spinlock
> held to prevent out-of-order triggering of the DMA transfers.
> 
> Does this look OK to you?
> 

I think yes. My current version is mostly similar to yours.
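
For reference, I'd guess the cppi41_run_queue() helper in the hunk cut
above is just the old resume loop factored out, something like:

/* Flush the pending list to the hardware queue. Caller must hold
 * cdd->lock so descriptors cannot be pushed out of order against
 * cppi41_runtime_resume().
 */
static void cppi41_run_queue(struct cppi41_dd *cdd)
{
	struct cppi41_channel *c, *_c;

	list_for_each_entry_safe(c, _c, &cdd->pending, node) {
		push_desc_queue(c);
		list_del(&c->node);
	}
}

With both issue_pending() and runtime_resume() calling it under
cdd->lock, the out-of-order concern should be covered.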

> 
> 8< -----------------------
> diff --git a/drivers/dma/cppi41.c b/drivers/dma/cppi41.c
> index d5ba43a87a68..6ee593eb2acb 100644
> --- a/drivers/dma/cppi41.c
> +++ b/drivers/dma/cppi41.c
> @@ -153,6 +153,8 @@ struct cppi41_dd {
>  
>  	/* context for suspend/resume */
>  	unsigned int dma_tdfdq;
> +
> +	bool is_suspended;
>  };
>  
>  #define FIST_COMPLETION_QUEUE	93
> @@ -316,11 +318,12 @@ static irqreturn_t cppi41_irq(int irq, void *data)

[..]

>  
>  	pm_runtime_mark_last_busy(cdd->ddev.dev);
>  	pm_runtime_put_autosuspend(cdd->ddev.dev);
> @@ -1150,6 +1165,11 @@ static int __maybe_unused cppi41_resume(struct device *dev)
>  static int __maybe_unused cppi41_runtime_suspend(struct device *dev)
>  {
>  	struct cppi41_dd *cdd = dev_get_drvdata(dev);
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&cdd->lock, flags);
> +	cdd->is_suspended = true;
> +	spin_unlock_irqrestore(&cdd->lock, flags);
>  
>  	WARN_ON(!list_empty(&cdd->pending));

Shouldn't we check list_empty() under the spinlock?
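
I.e. something like (sketch only; the rest of the function stays as in
your patch):

static int __maybe_unused cppi41_runtime_suspend(struct device *dev)
{
	struct cppi41_dd *cdd = dev_get_drvdata(dev);
	unsigned long flags;

	spin_lock_irqsave(&cdd->lock, flags);
	cdd->is_suspended = true;
	WARN_ON(!list_empty(&cdd->pending));
	spin_unlock_irqrestore(&cdd->lock, flags);

	/* ... rest unchanged ... */

	return 0;
}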

>  
> @@ -1159,14 +1179,11 @@ static int __maybe_unused cppi41_runtime_suspend(struct device *dev)
>  static int __maybe_unused cppi41_runtime_resume(struct device *dev)
>  {
>  	struct cppi41_dd *cdd = dev_get_drvdata(dev);
> -	struct cppi41_channel *c, *_c;
>  	unsigned long flags;
>  
>  	spin_lock_irqsave(&cdd->lock, flags);
> -	list_for_each_entry_safe(c, _c, &cdd->pending, node) {
> -		push_desc_queue(c);
> -		list_del(&c->node);
> -	}
> +	cdd->is_suspended = false;
> +	cppi41_run_queue(cdd);
>  	spin_unlock_irqrestore(&cdd->lock, flags);
>  
>  	return 0;
> 

-- 
regards,
-grygorii