Hi Peter,

On Thu, Jul 09, 2020 at 04:07:38PM +0300, Peter Ujfalusi wrote:
> On 08/07/2020 23.19, Laurent Pinchart wrote:
> > A few virt-dma functions are documented as requiring the vc.lock to be
> > held by the caller. Check this with lockdep.
> > 
> > The vchan_vdesc_fini() and vchan_find_desc() functions gain a lockdep
> 
> vchan_vdesc_fini() is used outside of the held vc->lock via
> vchan_complete() and the customized local re-implementation of it in
> ti/k3-udma.c.
> 
> This patch did not add the lockdep_assert_held() to the _fini.
> The vchan_complete() path can be an issue only when the descriptor is
> set to DMA_CTRL_REUSE.

I'll drop the patch completely, I don't need it for this series. I still
think it's useful though, so if someone wants to pick it up and fix it,
please don't hesitate.

> > check as well, because, even though they are not documented with this
> > requirement (and not documented at all for the latter), they touch
> > fields documented as protected by vc.lock. All callers have been
> > manually inspected to verify they call the functions with the lock
> > held.
> > 
> > Signed-off-by: Laurent Pinchart <laurent.pinchart@xxxxxxxxxxxxxxxx>
> > ---
> >  drivers/dma/virt-dma.c |  2 ++
> >  drivers/dma/virt-dma.h | 10 ++++++++++
> >  2 files changed, 12 insertions(+)
> > 
> > diff --git a/drivers/dma/virt-dma.c b/drivers/dma/virt-dma.c
> > index 23e33a85f033..1cb36ab3d9c1 100644
> > --- a/drivers/dma/virt-dma.c
> > +++ b/drivers/dma/virt-dma.c
> > @@ -68,6 +68,8 @@ struct virt_dma_desc *vchan_find_desc(struct virt_dma_chan *vc,
> >  {
> >  	struct virt_dma_desc *vd;
> > 
> > +	lockdep_assert_held(&vc->lock);
> > +
> >  	list_for_each_entry(vd, &vc->desc_issued, node)
> >  		if (vd->tx.cookie == cookie)
> >  			return vd;
> > diff --git a/drivers/dma/virt-dma.h b/drivers/dma/virt-dma.h
> > index e9f5250fbe4d..59d9eabc8b67 100644
> > --- a/drivers/dma/virt-dma.h
> > +++ b/drivers/dma/virt-dma.h
> > @@ -81,6 +81,8 @@ static inline struct dma_async_tx_descriptor *vchan_tx_prep(struct virt_dma_chan
> >   */
> >  static inline bool vchan_issue_pending(struct virt_dma_chan *vc)
> >  {
> > +	lockdep_assert_held(&vc->lock);
> > +
> >  	list_splice_tail_init(&vc->desc_submitted, &vc->desc_issued);
> >  	return !list_empty(&vc->desc_issued);
> >  }
> > @@ -96,6 +98,8 @@ static inline void vchan_cookie_complete(struct virt_dma_desc *vd)
> >  	struct virt_dma_chan *vc = to_virt_chan(vd->tx.chan);
> >  	dma_cookie_t cookie;
> > 
> > +	lockdep_assert_held(&vc->lock);
> > +
> >  	cookie = vd->tx.cookie;
> >  	dma_cookie_complete(&vd->tx);
> >  	dev_vdbg(vc->chan.device->dev, "txd %p[%x]: marked complete\n",
> > @@ -146,6 +150,8 @@ static inline void vchan_terminate_vdesc(struct virt_dma_desc *vd)
> >  {
> >  	struct virt_dma_chan *vc = to_virt_chan(vd->tx.chan);
> > 
> > +	lockdep_assert_held(&vc->lock);
> > +
> >  	list_add_tail(&vd->node, &vc->desc_terminated);
> > 
> >  	if (vc->cyclic == vd)
> > @@ -160,6 +166,8 @@ static inline void vchan_terminate_vdesc(struct virt_dma_desc *vd)
> >   */
> >  static inline struct virt_dma_desc *vchan_next_desc(struct virt_dma_chan *vc)
> >  {
> > +	lockdep_assert_held(&vc->lock);
> > +
> >  	return list_first_entry_or_null(&vc->desc_issued,
> >  					struct virt_dma_desc, node);
> >  }
> > @@ -177,6 +185,8 @@ static inline struct virt_dma_desc *vchan_next_desc(struct virt_dma_chan *vc)
> >  static inline void vchan_get_all_descriptors(struct virt_dma_chan *vc,
> >  	struct list_head *head)
> >  {
> > +	lockdep_assert_held(&vc->lock);
> > +
> >  	list_splice_tail_init(&vc->desc_allocated, head);
> >  	list_splice_tail_init(&vc->desc_submitted, head);
> >  	list_splice_tail_init(&vc->desc_issued, head);

-- 
Regards,

Laurent Pinchart
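
For context, the calling convention that these assertions verify is the
usual virt-dma locking contract, sketched below for a driver's
issue_pending callback. This is an illustrative sketch only, not part of
the patch: the foo_dma_issue_pending() and foo_dma_start_next() names are
hypothetical, and only the virt-dma helpers themselves are real. The
point is simply that vchan_issue_pending() and vchan_next_desc() are
reached with vc->lock held, so the new lockdep_assert_held() checks stay
silent.

	/* Hypothetical driver code; assumes #include "virt-dma.h". */
	static void foo_dma_issue_pending(struct dma_chan *chan)
	{
		struct virt_dma_chan *vc = to_virt_chan(chan);
		unsigned long flags;

		spin_lock_irqsave(&vc->lock, flags);

		/* Splices desc_submitted onto desc_issued under vc->lock. */
		if (vchan_issue_pending(vc)) {
			/* Peek at the head of desc_issued, still under the lock. */
			struct virt_dma_desc *vd = vchan_next_desc(vc);

			if (vd)
				foo_dma_start_next(vc, vd); /* hypothetical HW start */
		}

		spin_unlock_irqrestore(&vc->lock, flags);
	}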