On 04/07/2014 03:11 PM, Hans Verkuil wrote:
> From: Hans Verkuil <hans.verkuil@xxxxxxxxx>
>
> The videobuf2-core did not zero the 'planes' array in __qbuf_userptr()
> and __qbuf_dmabuf(). That's now memset to 0. Without this the reserved
> array in struct v4l2_plane would be non-zero, causing v4l2-compliance
> errors.
>
> More serious is the fact that data_offset was not handled correctly:
>
> - for capture devices it was never zeroed, which meant that it was
>   uninitialized. Unless the driver sets it, it was a completely random
>   number. With the memset above this is now fixed.
>
> - __qbuf_dmabuf had a completely incorrect length check that included
>   data_offset.

Hi Hans,

I may understand it wrongly, but IMO allowing a non-zero data_offset
simplifies buffer sharing using dmabuf. I remember a problem that occurred
when someone wanted to use a single dmabuf with the multiplanar API.

For example, MFC shares a buffer with DRM. Assume that the DRM device
forces the whole image to be located in one dmabuf. MFC uses the
multiplanar API, therefore the application must use the same dmabuf to
describe the luma and chroma planes. It is intuitive to use the same
dmabuf for both planes, with data_offset = 0 for the luma plane and
data_offset = luma_size for the chroma plane (a rough sketch of what I
mean follows at the end of this mail).

The check:

> -	if (planes[plane].length < planes[plane].data_offset +
> -	    q->plane_sizes[plane]) {

ensured that the logical plane does not overflow the dmabuf.

Am I wrong?

Regards,
Tomasz Stanislawski

> - in __fill_vb2_buffer in the DMABUF case the data_offset field was
>   unconditionally copied from v4l2_buffer to v4l2_plane when this
>   should only happen in the output case.
>
> - in the single-planar case data_offset was never correctly set to 0.
>   The single-planar API doesn't support data_offset, so setting it
>   to 0 is the right thing to do. This too is now solved by the memset.
>
> All these issues were found with v4l2-compliance.
>
> Signed-off-by: Hans Verkuil <hans.verkuil@xxxxxxxxx>
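For illustration only, here is a minimal userspace sketch of the case I
described: one NV12-style frame queued on an OUTPUT_MPLANE queue where
both logical planes reference the same dmabuf fd and differ only in
data_offset. The function name, the plane sizes and the assumption that
bytesused counts from the start of the plane (i.e. includes data_offset)
are my own; error handling is omitted.

	#include <string.h>
	#include <sys/ioctl.h>
	#include <linux/videodev2.h>

	static int queue_single_dmabuf(int video_fd, int dmabuf_fd,
				       unsigned int luma_size,
				       unsigned int chroma_size)
	{
		struct v4l2_plane planes[2];
		struct v4l2_buffer buf;

		memset(planes, 0, sizeof(planes));
		memset(&buf, 0, sizeof(buf));

		/* Luma plane: starts at the beginning of the dmabuf. */
		planes[0].m.fd = dmabuf_fd;
		planes[0].length = luma_size + chroma_size;	/* whole dmabuf */
		planes[0].data_offset = 0;
		planes[0].bytesused = luma_size;

		/* Chroma plane: same dmabuf, payload starts after the luma data. */
		planes[1].m.fd = dmabuf_fd;
		planes[1].length = luma_size + chroma_size;	/* whole dmabuf */
		planes[1].data_offset = luma_size;
		planes[1].bytesused = luma_size + chroma_size;

		buf.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
		buf.memory = V4L2_MEMORY_DMABUF;
		buf.index = 0;
		buf.m.planes = planes;
		buf.length = 2;			/* number of planes */

		return ioctl(video_fd, VIDIOC_QBUF, &buf);
	}

With the length check you removed, the core would reject this only if
data_offset + plane_sizes[plane] did not fit inside the dmabuf, which is
exactly the protection I would like to keep.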