On 07/23/2015 06:59 PM, Rob Clark wrote:
> On Thu, Jul 23, 2015 at 5:02 AM, Stanimir Varbanov
> <stanimir.varbanov@xxxxxxxxxx> wrote:
>> Hi Rob,
>>
>> I ran into issues with the msm drm driver while implementing a test
>> application which uses the v4l2 vidc (venus) decoder driver to decode
>> videos and the msm drm driver to display the decoded frames. The v4l2
>> test application uses msm drm as the dmabuf exporter, via
>> DRM_IOCTL_MODE_CREATE_DUMB and DRM_IOCTL_PRIME_HANDLE_TO_FD from libdrm.
>>
>> So far the test app is able to decode and display using dmabuf type
>> buffers (to be honest it is slower than using mmap & memcpy, but that is
>> another story). The problems start when destroying drm buffers with the
>> DRM_IOCTL_MODE_DESTROY_DUMB ioctl and also when closing the dmabuf fd.
>> The issues I'm seeing are:
>>
>> - The first, and the major one, happens in msm_gem_free_object(), where
>> we try to free an sg table which has already been freed by the drm_prime
>> core in the dmabuf .detach callback. In fact msm_gem_free_object() is
>> called from the dmabuf .release callback, which happens after .detach.
>> I find it weird to call sg_free_table() in the .detach callback and do
>> not understand why it is there.
>
> so, I think in msm_gem_prime_get_sg_table() we need to create a
> duplicate sgt.. since dma_buf_map_attachment() (and therefore
> ->gem_prime_get_sg_table()) returns ownership of the sgt.  So it is
> not correct to be returning our own internal copy.
>
>> - The second one is again in msm_gem.c::put_pages(), in the
>> dma_unmap_sg() call. Some background: the vidc (venus) driver uses
>> dma-iommu infrastructure copied from the qcom downstream kernel to
>> abstract iommu map/unmap [1]. On the other side, when the drm driver is
>> the exporter it uses the swiotlb [2] dma map operations, so its
>> dma_map_sg() fills sg->dma_address with physical addresses. So when vidc
>> calls dma_map_sg() the sg dma_address is filled with an iova, and then
>> drm calls dma_unmap_sg() expecting physical addresses.
>
> hmm.. so drm_gem_map_dma_buf() should dma_map_sg() using attach->dev
> (which should be vidc, not drm).  Maybe something unexpected was
> happening there since we were incorrectly returning our own sgt (which
> had already been dma_map_sg()'d w/ the drm device for cache
> maintenance?

In fact this is a consequence of the first issue.

>
> Could you try:
>
> -------------
> diff --git a/drivers/gpu/drm/msm/msm_gem_prime.c b/drivers/gpu/drm/msm/msm_gem_prime.c
> index dd7a7ab..831461b 100644
> --- a/drivers/gpu/drm/msm/msm_gem_prime.c
> +++ b/drivers/gpu/drm/msm/msm_gem_prime.c
> @@ -23,8 +23,12 @@
>  struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj)
>  {
>         struct msm_gem_object *msm_obj = to_msm_bo(obj);
> -       BUG_ON(!msm_obj->sgt);  /* should have already pinned! */
> -       return msm_obj->sgt;
> +       int npages = obj->size >> PAGE_SHIFT;
> +
> +       if (WARN_ON(!msm_obj->pages))  /* should have already pinned! */
> +               return NULL;
> +
> +       return drm_prime_pages_to_sg(msm_obj->pages, npages);
>  }
>

Brilliant diff, duplicating the sgt fixed both issues! Care to fix this in
mainline?

-- 
regards,
Stan
--
To unsubscribe from this list: send the line "unsubscribe linux-arm-msm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
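
[Editor's note] For readers outside the thread, the reason the duplicated sgt
cures both crashes is the ownership rule Rob mentions: the drm_prime core maps
the table the exporter hands back against the *importer's* device (attach->dev,
i.e. venus here) and unmaps/frees it itself on detach. The following is a
condensed, hypothetical sketch of that importer-side flow; the sketch_* names
are illustrative (not the real drm_prime helpers), and details of the ~4.x-era
code are simplified.

```c
#include <linux/dma-buf.h>
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>
#include <linux/err.h>
#include <drm/drmP.h>

static struct sg_table *sketch_map_dma_buf(struct dma_buf_attachment *attach,
					   enum dma_data_direction dir)
{
	struct drm_gem_object *obj = attach->dmabuf->priv;
	struct sg_table *sgt;

	/* the exporter hands back a table it no longer owns */
	sgt = obj->dev->driver->gem_prime_get_sg_table(obj);
	if (IS_ERR_OR_NULL(sgt))
		return sgt;

	/* mapped for the importer (venus), so dma_address holds venus iovas */
	if (!dma_map_sg(attach->dev, sgt->sgl, sgt->nents, dir)) {
		sg_free_table(sgt);
		kfree(sgt);
		return ERR_PTR(-ENOMEM);
	}
	return sgt;
}

static void sketch_detach(struct dma_buf_attachment *attach,
			  struct sg_table *sgt, enum dma_data_direction dir)
{
	/* the core unmaps and frees the table itself on detach ... */
	dma_unmap_sg(attach->dev, sgt->sgl, sgt->nents, dir);
	sg_free_table(sgt);
	kfree(sgt);
	/*
	 * ... so if this had been msm_obj->sgt, msm_gem_free_object() would
	 * later sg_free_table() the same table again (the first issue above).
	 */
}
```

Because the mapping in this path is done with attach->dev, returning the
exporter's internal table (already mapped with the msm drm device) also means
the addresses later passed to dma_unmap_sg() in put_pages() were produced by a
different device's dma ops, which is the second symptom described in the thread.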