Re: [PATCH v2 2/2] media: imx: vdic: Introduce mem2mem VDI deinterlacer driver

Hi Marek,

On Mi, 2024-07-24 at 02:19 +0200, Marek Vasut wrote:
> Introduce dedicated memory-to-memory IPUv3 VDI deinterlacer driver.
> Currently the IPUv3 can operate VDI in DIRECT mode, from sensor to
> memory. This only works for a single stream, that is, one input from
> one camera is deinterlaced on the fly with a helper buffer in DRAM
> and the result is written into memory.
> 
> The i.MX6Q/QP does support up to four analog cameras via two IPUv3
> instances, each containing one VDI deinterlacer block. In order to
> deinterlace all four streams from all four analog cameras live, it
> is necessary to operate VDI in INDIRECT mode, where the interlaced
> streams are written to buffers in memory, and then deinterlaced in
> memory using VDI in INDIRECT memory-to-memory mode.
> 
> This driver also makes use of the IDMAC->VDI->IC->IDMAC data path
> to provide pixel format conversion from input YUV formats to both
> YUV and RGB output formats. The latter is useful in case the data
> are imported into the GPU, which on this platform cannot directly
> sample YUV buffers.
> 
> This is derived from previous work by Steve Longerbeam and from the
> IPUv3 CSC Scaler mem2mem driver.
> 
> Signed-off-by: Marek Vasut <marex@xxxxxxx>
> ---
> Cc: Daniel Vetter <daniel@xxxxxxxx>
> Cc: David Airlie <airlied@xxxxxxxxx>
> Cc: Fabio Estevam <festevam@xxxxxxxxx>
> Cc: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
> Cc: Helge Deller <deller@xxxxxx>
> Cc: Mauro Carvalho Chehab <mchehab@xxxxxxxxxx>
> Cc: Pengutronix Kernel Team <kernel@xxxxxxxxxxxxxx>
> Cc: Philipp Zabel <p.zabel@xxxxxxxxxxxxxx>
> Cc: Sascha Hauer <s.hauer@xxxxxxxxxxxxxx>
> Cc: Shawn Guo <shawnguo@xxxxxxxxxx>
> Cc: Steve Longerbeam <slongerbeam@xxxxxxxxx>
> Cc: dri-devel@xxxxxxxxxxxxxxxxxxxxx
> Cc: imx@xxxxxxxxxxxxxxx
> Cc: linux-arm-kernel@xxxxxxxxxxxxxxxxxxx
> Cc: linux-fbdev@xxxxxxxxxxxxxxx
> Cc: linux-media@xxxxxxxxxxxxxxx
> Cc: linux-staging@xxxxxxxxxxxxxxx
> ---
> V2: - Add complementary imx_media_mem2mem_vdic_uninit()
>     - Drop uninitialized ret from ipu_mem2mem_vdic_device_run()
>     - Drop duplicate nbuffers assignment in ipu_mem2mem_vdic_queue_setup()
>     - Fix %u formatting string in ipu_mem2mem_vdic_queue_setup()
>     - Drop devm_*free from ipu_mem2mem_vdic_get_ipu_resources() fail path
>       and ipu_mem2mem_vdic_put_ipu_resources()
>     - Add missing video_device_release()
> ---
>  drivers/staging/media/imx/Makefile            |   2 +-
>  drivers/staging/media/imx/imx-media-dev.c     |  55 +
>  .../media/imx/imx-media-mem2mem-vdic.c        | 997 ++++++++++++++++++
>  drivers/staging/media/imx/imx-media.h         |  10 +
>  4 files changed, 1063 insertions(+), 1 deletion(-)
>  create mode 100644 drivers/staging/media/imx/imx-media-mem2mem-vdic.c
> 
> diff --git a/drivers/staging/media/imx/Makefile b/drivers/staging/media/imx/Makefile
> index 330e0825f506b..0cad87123b590 100644
> --- a/drivers/staging/media/imx/Makefile
> +++ b/drivers/staging/media/imx/Makefile
> @@ -4,7 +4,7 @@ imx-media-common-objs := imx-media-capture.o imx-media-dev-common.o \
>  
>  imx6-media-objs := imx-media-dev.o imx-media-internal-sd.o \
>  	imx-ic-common.o imx-ic-prp.o imx-ic-prpencvf.o imx-media-vdic.o \
> -	imx-media-csc-scaler.o
> +	imx-media-mem2mem-vdic.o imx-media-csc-scaler.o
>  
>  imx6-media-csi-objs := imx-media-csi.o imx-media-fim.o
>  
> diff --git a/drivers/staging/media/imx/imx-media-dev.c b/drivers/staging/media/imx/imx-media-dev.c
> index be54dca11465d..a841fdb4c2394 100644
> --- a/drivers/staging/media/imx/imx-media-dev.c
> +++ b/drivers/staging/media/imx/imx-media-dev.c
> @@ -57,7 +57,52 @@ static int imx6_media_probe_complete(struct v4l2_async_notifier *notifier)
>  		goto unlock;
>  	}
>  
> +	imxmd->m2m_vdic[0] = imx_media_mem2mem_vdic_init(imxmd, 0);
> +	if (IS_ERR(imxmd->m2m_vdic[0])) {
> +		ret = PTR_ERR(imxmd->m2m_vdic[0]);
> +		imxmd->m2m_vdic[0] = NULL;
> +		goto unlock;
> +	}
> +
> +	/* MX6S/DL has one IPUv3, init second VDI only on MX6Q/QP */
> +	if (imxmd->ipu[1]) {
> +		imxmd->m2m_vdic[1] = imx_media_mem2mem_vdic_init(imxmd, 1);
> +		if (IS_ERR(imxmd->m2m_vdic[1])) {
> +			ret = PTR_ERR(imxmd->m2m_vdic[1]);
> +			imxmd->m2m_vdic[1] = NULL;
> +			goto uninit_vdi0;
> +		}
> +	}

Instead of presenting two devices to userspace, it would be better to
have a single video device that can distribute work to both IPUs.
To be fair, we never implemented that for the CSC/scaler mem2mem device
either.
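
Something like a per-job IPU selection could do that. Very rough
sketch only, assuming the per-IPU VDI/IC/channel resources move into a
priv->ipu[] array of a single device instance (the
ipu_mem2mem_vdic_ipu struct and the next_ipu counter are made up
here, they are not part of this patch):

	static struct ipu_mem2mem_vdic_ipu *
	ipu_mem2mem_vdic_pick_ipu(struct ipu_mem2mem_vdic_priv *priv)
	{
		unsigned int i = 0;

		/* Only MX6Q/QP has a second IPUv3/VDI to spread jobs onto */
		if (priv->md->ipu[1])
			i = atomic_inc_return(&priv->next_ipu) & 1;

		return &priv->ipu[i];
	}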

> +
>  	ret = imx_media_csc_scaler_device_register(imxmd->m2m_vdev);
> +	if (ret)
> +		goto uninit_vdi1;
> +
> +	ret = imx_media_mem2mem_vdic_register(imxmd->m2m_vdic[0]);
> +	if (ret)
> +		goto unreg_csc;
> +
> +	/* MX6S/DL has one IPUv3, init second VDI only on MX6Q/QP */
> +	if (imxmd->ipu[1]) {
> +		ret = imx_media_mem2mem_vdic_register(imxmd->m2m_vdic[1]);
> +		if (ret)
> +			goto unreg_vdic;
> +	}
> +
> +	mutex_unlock(&imxmd->mutex);
> +	return ret;
> +
> +unreg_vdic:
> +	imx_media_mem2mem_vdic_unregister(imxmd->m2m_vdic[0]);
> +	imxmd->m2m_vdic[0] = NULL;
> +unreg_csc:
> +	imx_media_csc_scaler_device_unregister(imxmd->m2m_vdev);
> +	imxmd->m2m_vdev = NULL;
> +uninit_vdi1:
> +	if (imxmd->ipu[1])
> +		imx_media_mem2mem_vdic_uninit(imxmd->m2m_vdic[1]);
> +uninit_vdi0:
> +	imx_media_mem2mem_vdic_uninit(imxmd->m2m_vdic[0]);
>  unlock:
>  	mutex_unlock(&imxmd->mutex);
>  	return ret;
> @@ -108,6 +153,16 @@ static void imx_media_remove(struct platform_device *pdev)
>  
>  	v4l2_info(&imxmd->v4l2_dev, "Removing imx-media\n");
>  
> +	if (imxmd->m2m_vdic[1]) {	/* MX6Q/QP only */
> +		imx_media_mem2mem_vdic_unregister(imxmd->m2m_vdic[1]);
> +		imxmd->m2m_vdic[1] = NULL;
> +	}
> +
> +	if (imxmd->m2m_vdic[0]) {
> +		imx_media_mem2mem_vdic_unregister(imxmd->m2m_vdic[0]);
> +		imxmd->m2m_vdic[0] = NULL;
> +	}
> +
>  	if (imxmd->m2m_vdev) {
>  		imx_media_csc_scaler_device_unregister(imxmd->m2m_vdev);
>  		imxmd->m2m_vdev = NULL;
> diff --git a/drivers/staging/media/imx/imx-media-mem2mem-vdic.c b/drivers/staging/media/imx/imx-media-mem2mem-vdic.c
> new file mode 100644
> index 0000000000000..71c6c023d2bf8
> --- /dev/null
> +++ b/drivers/staging/media/imx/imx-media-mem2mem-vdic.c
> @@ -0,0 +1,997 @@
> +// SPDX-License-Identifier: GPL-2.0-or-later
> +/*
> + * i.MX VDIC mem2mem de-interlace driver
> + *
> + * Copyright (c) 2024 Marek Vasut <marex@xxxxxxx>
> + *
> + * Based on previous VDIC mem2mem work by Steve Longerbeam that is:
> + * Copyright (c) 2018 Mentor Graphics Inc.
> + */
> +
> +#include <linux/delay.h>
> +#include <linux/fs.h>
> +#include <linux/module.h>
> +#include <linux/sched.h>
> +#include <linux/slab.h>
> +#include <linux/version.h>
> +
> +#include <media/media-device.h>
> +#include <media/v4l2-ctrls.h>
> +#include <media/v4l2-device.h>
> +#include <media/v4l2-event.h>
> +#include <media/v4l2-ioctl.h>
> +#include <media/v4l2-mem2mem.h>
> +#include <media/videobuf2-dma-contig.h>
> +
> +#include "imx-media.h"
> +
> +#define fh_to_ctx(__fh)	container_of(__fh, struct ipu_mem2mem_vdic_ctx, fh)
> +
> +#define to_mem2mem_priv(v) container_of(v, struct ipu_mem2mem_vdic_priv, vdev)

These could be inline functions for added type safety.
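
For example, just as a sketch of what I mean:

	static inline struct ipu_mem2mem_vdic_ctx *fh_to_ctx(struct v4l2_fh *fh)
	{
		return container_of(fh, struct ipu_mem2mem_vdic_ctx, fh);
	}

	static inline struct ipu_mem2mem_vdic_priv *
	to_mem2mem_priv(struct imx_media_video_dev *vdev)
	{
		return container_of(vdev, struct ipu_mem2mem_vdic_priv, vdev);
	}

That way the compiler complains if anything other than the expected
pointer type is passed in.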

> +
> +enum {
> +	V4L2_M2M_SRC = 0,
> +	V4L2_M2M_DST = 1,
> +};
> +
> +struct ipu_mem2mem_vdic_ctx;
> +
> +struct ipu_mem2mem_vdic_priv {
> +	struct imx_media_video_dev	vdev;
> +	struct imx_media_dev		*md;
> +	struct device			*dev;
> +	struct ipu_soc			*ipu_dev;
> +	int				ipu_id;
> +
> +	struct v4l2_m2m_dev		*m2m_dev;
> +	struct mutex			mutex;		/* mem2mem device mutex */
> +
> +	/* VDI resources */
> +	struct ipu_vdi			*vdi;
> +	struct ipu_ic			*ic;
> +	struct ipuv3_channel		*vdi_in_ch_p;
> +	struct ipuv3_channel		*vdi_in_ch;
> +	struct ipuv3_channel		*vdi_in_ch_n;
> +	struct ipuv3_channel		*vdi_out_ch;
> +	int				eof_irq;
> +	int				nfb4eof_irq;
> +	spinlock_t			irqlock;	/* protect eof_irq handler */
> +
> +	atomic_t			stream_count;
> +
> +	struct ipu_mem2mem_vdic_ctx	*curr_ctx;
> +
> +	struct v4l2_pix_format		fmt[2];
> +};
> +
> +struct ipu_mem2mem_vdic_ctx {
> +	struct ipu_mem2mem_vdic_priv	*priv;
> +	struct v4l2_fh			fh;
> +	unsigned int			sequence;
> +	struct vb2_v4l2_buffer		*prev_buf;
> +	struct vb2_v4l2_buffer		*curr_buf;
> +};
> +
> +static struct v4l2_pix_format *
> +ipu_mem2mem_vdic_get_format(struct ipu_mem2mem_vdic_priv *priv,
> +			    enum v4l2_buf_type type)
> +{
> +	return &priv->fmt[V4L2_TYPE_IS_OUTPUT(type) ? V4L2_M2M_SRC : V4L2_M2M_DST];
> +}
> +
> +static bool ipu_mem2mem_vdic_format_is_yuv420(const u32 pixelformat)
> +{
> +	/* All 4:2:0 subsampled formats supported by this hardware */
> +	return pixelformat == V4L2_PIX_FMT_YUV420 ||
> +	       pixelformat == V4L2_PIX_FMT_YVU420 ||
> +	       pixelformat == V4L2_PIX_FMT_NV12;
> +}
> +
> +static bool ipu_mem2mem_vdic_format_is_yuv422(const u32 pixelformat)
> +{
> +	/* All 4:2:2 subsampled formats supported by this hardware */
> +	return pixelformat == V4L2_PIX_FMT_UYVY ||
> +	       pixelformat == V4L2_PIX_FMT_YUYV ||
> +	       pixelformat == V4L2_PIX_FMT_YUV422P ||
> +	       pixelformat == V4L2_PIX_FMT_NV16;
> +}
> +
> +static bool ipu_mem2mem_vdic_format_is_yuv(const u32 pixelformat)
> +{
> +	return ipu_mem2mem_vdic_format_is_yuv420(pixelformat) ||
> +	       ipu_mem2mem_vdic_format_is_yuv422(pixelformat);
> +}
> +
> +static bool ipu_mem2mem_vdic_format_is_rgb16(const u32 pixelformat)
> +{
> +	/* All 16-bit RGB formats supported by this hardware */
> +	return pixelformat == V4L2_PIX_FMT_RGB565;
> +}
> +
> +static bool ipu_mem2mem_vdic_format_is_rgb24(const u32 pixelformat)
> +{
> +	/* All 24-bit RGB formats supported by this hardware */
> +	return pixelformat == V4L2_PIX_FMT_RGB24 ||
> +	       pixelformat == V4L2_PIX_FMT_BGR24;
> +}
> +
> +static bool ipu_mem2mem_vdic_format_is_rgb32(const u32 pixelformat)
> +{
> +	/* All 32-bit RGB formats supported by this hardware */
> +	return pixelformat == V4L2_PIX_FMT_XRGB32 ||
> +	       pixelformat == V4L2_PIX_FMT_XBGR32 ||
> +	       pixelformat == V4L2_PIX_FMT_BGRX32 ||
> +	       pixelformat == V4L2_PIX_FMT_RGBX32;
> +}
> +
> +/*
> + * mem2mem callbacks
> + */
> +static irqreturn_t ipu_mem2mem_vdic_eof_interrupt(int irq, void *dev_id)
> +{
> +	struct ipu_mem2mem_vdic_priv *priv = dev_id;
> +	struct ipu_mem2mem_vdic_ctx *ctx = priv->curr_ctx;
> +	struct vb2_v4l2_buffer *src_buf, *dst_buf;
> +
> +	spin_lock(&priv->irqlock);
> +
> +	src_buf = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
> +	dst_buf = v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx);
> +
> +	v4l2_m2m_buf_copy_metadata(src_buf, dst_buf, true);
> +
> +	src_buf->sequence = ctx->sequence++;
> +	dst_buf->sequence = src_buf->sequence;
> +
> +	v4l2_m2m_buf_done(src_buf, VB2_BUF_STATE_DONE);
> +	v4l2_m2m_buf_done(dst_buf, VB2_BUF_STATE_DONE);
> +
> +	v4l2_m2m_job_finish(priv->m2m_dev, ctx->fh.m2m_ctx);
> +
> +	spin_unlock(&priv->irqlock);
> +
> +	return IRQ_HANDLED;
> +}
> +
> +static irqreturn_t ipu_mem2mem_vdic_nfb4eof_interrupt(int irq, void *dev_id)
> +{
> +	struct ipu_mem2mem_vdic_priv *priv = dev_id;
> +
> +	/* That is about all we can do about it, report it. */
> +	dev_warn_ratelimited(priv->dev, "NFB4EOF error interrupt occurred\n");
> +
> +	return IRQ_HANDLED;
> +}
> +
> +static void ipu_mem2mem_vdic_device_run(void *_ctx)
> +{
> +	struct ipu_mem2mem_vdic_ctx *ctx = _ctx;
> +	struct ipu_mem2mem_vdic_priv *priv = ctx->priv;
> +	struct vb2_v4l2_buffer *curr_buf, *dst_buf;
> +	dma_addr_t prev_phys, curr_phys, out_phys;
> +	struct v4l2_pix_format *infmt;
> +	u32 phys_offset = 0;
> +	unsigned long flags;
> +
> +	infmt = ipu_mem2mem_vdic_get_format(priv, V4L2_BUF_TYPE_VIDEO_OUTPUT);
> +	if (V4L2_FIELD_IS_SEQUENTIAL(infmt->field))
> +		phys_offset = infmt->sizeimage / 2;
> +	else if (V4L2_FIELD_IS_INTERLACED(infmt->field))
> +		phys_offset = infmt->bytesperline;
> +	else
> +		dev_err(priv->dev, "Invalid field %d\n", infmt->field);
> +
> +	dst_buf = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx);
> +	out_phys = vb2_dma_contig_plane_dma_addr(&dst_buf->vb2_buf, 0);
> +
> +	curr_buf = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
> +	if (!curr_buf) {
> +		dev_err(priv->dev, "Not enough buffers\n");
> +		return;
> +	}
> +
> +	spin_lock_irqsave(&priv->irqlock, flags);
> +
> +	if (ctx->curr_buf) {
> +		ctx->prev_buf = ctx->curr_buf;
> +		ctx->curr_buf = curr_buf;
> +	} else {
> +		ctx->prev_buf = curr_buf;
> +		ctx->curr_buf = curr_buf;
> +		dev_warn(priv->dev, "Single-buffer mode, fix your userspace\n");
> +	}
> +
> +	prev_phys = vb2_dma_contig_plane_dma_addr(&ctx->prev_buf->vb2_buf, 0);
> +	curr_phys = vb2_dma_contig_plane_dma_addr(&ctx->curr_buf->vb2_buf, 0);
> +
> +	priv->curr_ctx = ctx;
> +	spin_unlock_irqrestore(&priv->irqlock, flags);
> +
> +	ipu_cpmem_set_buffer(priv->vdi_out_ch,  0, out_phys);
> +	ipu_cpmem_set_buffer(priv->vdi_in_ch_p, 0, prev_phys + phys_offset);
> +	ipu_cpmem_set_buffer(priv->vdi_in_ch,   0, curr_phys);
> +	ipu_cpmem_set_buffer(priv->vdi_in_ch_n, 0, curr_phys + phys_offset);

This always outputs at a frame rate of half the field rate, and only
top fields are ever used as the current field, with bottom fields only
serving as previous/next fields, right?

I think it would be good to add a mode that doesn't drop the

	ipu_cpmem_set_buffer(priv->vdi_in_ch_p, 0, prev_phys);
	ipu_cpmem_set_buffer(priv->vdi_in_ch,   0, prev_phys + phys_offset);
	ipu_cpmem_set_buffer(priv->vdi_in_ch_n, 0, curr_phys);

output frames, right from the start.

If we don't start with that supported, I fear userspace will make
assumptions and be surprised when a full rate mode is added later.
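
For illustration, device_run() could then alternate between the two
field phases for each source buffer pair, roughly like this (assuming
SEQ_TB input; the field_phase toggle is made up, not in this patch):

	if (!priv->field_phase) {
		/* frame built around the previous buffer's bottom field */
		ipu_cpmem_set_buffer(priv->vdi_in_ch_p, 0, prev_phys);
		ipu_cpmem_set_buffer(priv->vdi_in_ch,   0, prev_phys + phys_offset);
		ipu_cpmem_set_buffer(priv->vdi_in_ch_n, 0, curr_phys);
	} else {
		/* frame built around the current buffer's top field,
		 * i.e. what the patch does today
		 */
		ipu_cpmem_set_buffer(priv->vdi_in_ch_p, 0, prev_phys + phys_offset);
		ipu_cpmem_set_buffer(priv->vdi_in_ch,   0, curr_phys);
		ipu_cpmem_set_buffer(priv->vdi_in_ch_n, 0, curr_phys + phys_offset);
	}
	priv->field_phase = !priv->field_phase;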

> +
> +	/* No double buffering, always pick buffer 0 */
> +	ipu_idmac_select_buffer(priv->vdi_out_ch, 0);
> +	ipu_idmac_select_buffer(priv->vdi_in_ch_p, 0);
> +	ipu_idmac_select_buffer(priv->vdi_in_ch, 0);
> +	ipu_idmac_select_buffer(priv->vdi_in_ch_n, 0);
> +
> +	/* Enable the channels */
> +	ipu_idmac_enable_channel(priv->vdi_out_ch);
> +	ipu_idmac_enable_channel(priv->vdi_in_ch_p);
> +	ipu_idmac_enable_channel(priv->vdi_in_ch);
> +	ipu_idmac_enable_channel(priv->vdi_in_ch_n);
> +}
> +
> +/*
> + * Video ioctls
> + */
> +static int ipu_mem2mem_vdic_querycap(struct file *file, void *priv,
> +				     struct v4l2_capability *cap)
> +{
> +	strscpy(cap->driver, "imx-m2m-vdic", sizeof(cap->driver));
> +	strscpy(cap->card, "imx-m2m-vdic", sizeof(cap->card));
> +	strscpy(cap->bus_info, "platform:imx-m2m-vdic", sizeof(cap->bus_info));
> +	cap->device_caps = V4L2_CAP_VIDEO_M2M | V4L2_CAP_STREAMING;
> +	cap->capabilities = cap->device_caps | V4L2_CAP_DEVICE_CAPS;
> +
> +	return 0;
> +}
> +
> +static int ipu_mem2mem_vdic_enum_fmt(struct file *file, void *fh, struct v4l2_fmtdesc *f)
> +{
> +	struct ipu_mem2mem_vdic_ctx *ctx = fh_to_ctx(fh);
> +	struct vb2_queue *vq = v4l2_m2m_get_vq(ctx->fh.m2m_ctx, f->type);
> +	enum imx_pixfmt_sel cs = vq->type == V4L2_BUF_TYPE_VIDEO_CAPTURE ?
> +				 PIXFMT_SEL_YUV_RGB : PIXFMT_SEL_YUV;
> +	u32 fourcc;
> +	int ret;
> +
> +	ret = imx_media_enum_pixel_formats(&fourcc, f->index, cs, 0);
> +	if (ret)
> +		return ret;
> +
> +	f->pixelformat = fourcc;
> +
> +	return 0;
> +}
> +
> +static int ipu_mem2mem_vdic_g_fmt(struct file *file, void *fh, struct v4l2_format *f)
> +{
> +	struct ipu_mem2mem_vdic_ctx *ctx = fh_to_ctx(fh);
> +	struct ipu_mem2mem_vdic_priv *priv = ctx->priv;
> +	struct v4l2_pix_format *fmt = ipu_mem2mem_vdic_get_format(priv, f->type);
> +
> +	f->fmt.pix = *fmt;
> +
> +	return 0;
> +}
> +
> +static int ipu_mem2mem_vdic_try_fmt(struct file *file, void *fh,
> +				    struct v4l2_format *f)
> +{
> +	const struct imx_media_pixfmt *cc;
> +	enum imx_pixfmt_sel cs;
> +	u32 fourcc;
> +
> +	if (f->type == V4L2_BUF_TYPE_VIDEO_CAPTURE) {	/* Output */
> +		cs = PIXFMT_SEL_YUV_RGB;	/* YUV direct / RGB via IC */
> +
> +		f->fmt.pix.field = V4L2_FIELD_NONE;
> +	} else {
> +		cs = PIXFMT_SEL_YUV;		/* YUV input only */
> +
> +		/*
> +		 * Input must be interlaced with frame order.
> +		 * Fall back to SEQ_TB otherwise.
> +		 */
> +		if (!V4L2_FIELD_HAS_BOTH(f->fmt.pix.field) ||
> +		    f->fmt.pix.field == V4L2_FIELD_INTERLACED)
> +			f->fmt.pix.field = V4L2_FIELD_SEQ_TB;
> +	}
> +
> +	fourcc = f->fmt.pix.pixelformat;
> +	cc = imx_media_find_pixel_format(fourcc, cs);
> +	if (!cc) {
> +		imx_media_enum_pixel_formats(&fourcc, 0, cs, 0);
> +		cc = imx_media_find_pixel_format(fourcc, cs);
> +	}
> +
> +	f->fmt.pix.pixelformat = cc->fourcc;
> +
> +	v4l_bound_align_image(&f->fmt.pix.width,
> +			      1, 968, 1,
> +			      &f->fmt.pix.height,
> +			      1, 1024, 1, 1);
> +
> +	if (ipu_mem2mem_vdic_format_is_yuv420(f->fmt.pix.pixelformat))
> +		f->fmt.pix.bytesperline = f->fmt.pix.width * 3 / 2;
> +	else if (ipu_mem2mem_vdic_format_is_yuv422(f->fmt.pix.pixelformat))
> +		f->fmt.pix.bytesperline = f->fmt.pix.width * 2;
> +	else if (ipu_mem2mem_vdic_format_is_rgb16(f->fmt.pix.pixelformat))
> +		f->fmt.pix.bytesperline = f->fmt.pix.width * 2;
> +	else if (ipu_mem2mem_vdic_format_is_rgb24(f->fmt.pix.pixelformat))
> +		f->fmt.pix.bytesperline = f->fmt.pix.width * 3;
> +	else if (ipu_mem2mem_vdic_format_is_rgb32(f->fmt.pix.pixelformat))
> +		f->fmt.pix.bytesperline = f->fmt.pix.width * 4;
> +	else
> +		f->fmt.pix.bytesperline = f->fmt.pix.width;
> +
> +	f->fmt.pix.sizeimage = f->fmt.pix.height * f->fmt.pix.bytesperline;
> +
> +	return 0;
> +}
> +
> +static int ipu_mem2mem_vdic_s_fmt(struct file *file, void *fh, struct v4l2_format *f)
> +{
> +	struct ipu_mem2mem_vdic_ctx *ctx = fh_to_ctx(fh);
> +	struct ipu_mem2mem_vdic_priv *priv = ctx->priv;
> +	struct v4l2_pix_format *fmt, *infmt, *outfmt;
> +	struct vb2_queue *vq;
> +	int ret;
> +
> +	vq = v4l2_m2m_get_vq(ctx->fh.m2m_ctx, f->type);
> +	if (vb2_is_busy(vq)) {
> +		dev_err(priv->dev, "%s queue busy\n",  __func__);
> +		return -EBUSY;
> +	}
> +
> +	ret = ipu_mem2mem_vdic_try_fmt(file, fh, f);
> +	if (ret < 0)
> +		return ret;
> +
> +	fmt = ipu_mem2mem_vdic_get_format(priv, f->type);
> +	*fmt = f->fmt.pix;
> +
> +	/* Propagate colorimetry to the capture queue */
> +	infmt = ipu_mem2mem_vdic_get_format(priv, V4L2_BUF_TYPE_VIDEO_OUTPUT);
> +	outfmt = ipu_mem2mem_vdic_get_format(priv, V4L2_BUF_TYPE_VIDEO_CAPTURE);
> +	outfmt->colorspace = infmt->colorspace;
> +	outfmt->ycbcr_enc = infmt->ycbcr_enc;
> +	outfmt->xfer_func = infmt->xfer_func;
> +	outfmt->quantization = infmt->quantization;
> +
> +	return 0;
> +}
> +
> +static const struct v4l2_ioctl_ops mem2mem_ioctl_ops = {
> +	.vidioc_querycap		= ipu_mem2mem_vdic_querycap,
> +
> +	.vidioc_enum_fmt_vid_cap	= ipu_mem2mem_vdic_enum_fmt,
> +	.vidioc_g_fmt_vid_cap		= ipu_mem2mem_vdic_g_fmt,
> +	.vidioc_try_fmt_vid_cap		= ipu_mem2mem_vdic_try_fmt,
> +	.vidioc_s_fmt_vid_cap		= ipu_mem2mem_vdic_s_fmt,
> +
> +	.vidioc_enum_fmt_vid_out	= ipu_mem2mem_vdic_enum_fmt,
> +	.vidioc_g_fmt_vid_out		= ipu_mem2mem_vdic_g_fmt,
> +	.vidioc_try_fmt_vid_out		= ipu_mem2mem_vdic_try_fmt,
> +	.vidioc_s_fmt_vid_out		= ipu_mem2mem_vdic_s_fmt,
> +
> +	.vidioc_reqbufs			= v4l2_m2m_ioctl_reqbufs,
> +	.vidioc_querybuf		= v4l2_m2m_ioctl_querybuf,
> +
> +	.vidioc_qbuf			= v4l2_m2m_ioctl_qbuf,
> +	.vidioc_expbuf			= v4l2_m2m_ioctl_expbuf,
> +	.vidioc_dqbuf			= v4l2_m2m_ioctl_dqbuf,
> +	.vidioc_create_bufs		= v4l2_m2m_ioctl_create_bufs,
> +
> +	.vidioc_streamon		= v4l2_m2m_ioctl_streamon,
> +	.vidioc_streamoff		= v4l2_m2m_ioctl_streamoff,
> +
> +	.vidioc_subscribe_event		= v4l2_ctrl_subscribe_event,
> +	.vidioc_unsubscribe_event	= v4l2_event_unsubscribe,
> +};
> +
> +/*
> + * Queue operations
> + */
> +static int ipu_mem2mem_vdic_queue_setup(struct vb2_queue *vq, unsigned int *nbuffers,
> +					unsigned int *nplanes, unsigned int sizes[],
> +					struct device *alloc_devs[])
> +{
> +	struct ipu_mem2mem_vdic_ctx *ctx = vb2_get_drv_priv(vq);
> +	struct ipu_mem2mem_vdic_priv *priv = ctx->priv;
> +	struct v4l2_pix_format *fmt = ipu_mem2mem_vdic_get_format(priv, vq->type);
> +	unsigned int count = *nbuffers;
> +
> +	if (*nplanes)
> +		return sizes[0] < fmt->sizeimage ? -EINVAL : 0;
> +
> +	*nplanes = 1;
> +	sizes[0] = fmt->sizeimage;
> +
> +	dev_dbg(ctx->priv->dev, "get %u buffer(s) of size %d each.\n",
> +		count, fmt->sizeimage);
> +
> +	return 0;
> +}
> +
> +static int ipu_mem2mem_vdic_buf_prepare(struct vb2_buffer *vb)
> +{
> +	struct ipu_mem2mem_vdic_ctx *ctx = vb2_get_drv_priv(vb->vb2_queue);
> +	struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
> +	struct ipu_mem2mem_vdic_priv *priv = ctx->priv;
> +	struct vb2_queue *vq = vb->vb2_queue;
> +	struct v4l2_pix_format *fmt;
> +	unsigned long size;
> +
> +	dev_dbg(ctx->priv->dev, "type: %d\n", vb->vb2_queue->type);
> +
> +	if (V4L2_TYPE_IS_OUTPUT(vq->type)) {
> +		if (vbuf->field == V4L2_FIELD_ANY)
> +			vbuf->field = V4L2_FIELD_SEQ_TB;
> +		if (!V4L2_FIELD_HAS_BOTH(vbuf->field)) {
> +			dev_dbg(ctx->priv->dev, "%s: field isn't supported\n",
> +				__func__);
> +			return -EINVAL;
> +		}
> +	}
> +
> +	fmt = ipu_mem2mem_vdic_get_format(priv, vb->vb2_queue->type);
> +	size = fmt->sizeimage;
> +
> +	if (vb2_plane_size(vb, 0) < size) {
> +		dev_dbg(ctx->priv->dev,
> +			"%s: data will not fit into plane (%lu < %lu)\n",
> +			__func__, vb2_plane_size(vb, 0), size);
> +		return -EINVAL;
> +	}
> +
> +	vb2_set_plane_payload(vb, 0, fmt->sizeimage);
> +
> +	return 0;
> +}
> +
> +static void ipu_mem2mem_vdic_buf_queue(struct vb2_buffer *vb)
> +{
> +	struct ipu_mem2mem_vdic_ctx *ctx = vb2_get_drv_priv(vb->vb2_queue);
> +
> +	v4l2_m2m_buf_queue(ctx->fh.m2m_ctx, to_vb2_v4l2_buffer(vb));
> +}
> +
> +/* VDIC hardware setup */
> +static int ipu_mem2mem_vdic_setup_channel(struct ipu_mem2mem_vdic_priv *priv,
> +					  struct ipuv3_channel *channel,
> +					  struct v4l2_pix_format *fmt,
> +					  bool in)
> +{
> +	struct ipu_image image = { 0 };
> +	unsigned int burst_size;
> +	int ret;
> +
> +	image.pix = *fmt;
> +	image.rect.width = image.pix.width;
> +	image.rect.height = image.pix.height;
> +
> +	ipu_cpmem_zero(channel);
> +
> +	if (in) {
> +		/* One field to VDIC channels */
> +		image.pix.height /= 2;
> +		image.rect.height /= 2;
> +	} else {
> +		/* Skip writing U and V components to odd rows */
> +		if (ipu_mem2mem_vdic_format_is_yuv420(image.pix.pixelformat))
> +			ipu_cpmem_skip_odd_chroma_rows(channel);
> +	}
> +
> +	ret = ipu_cpmem_set_image(channel, &image);
> +	if (ret)
> +		return ret;
> +
> +	burst_size = (image.pix.width & 0xf) ? 8 : 16;
> +	ipu_cpmem_set_burstsize(channel, burst_size);
> +
> +	if (!ipu_prg_present(priv->ipu_dev))
> +		ipu_cpmem_set_axi_id(channel, 1);
> +
> +	ipu_idmac_set_double_buffer(channel, false);
> +
> +	return 0;
> +}
> +
> +static int ipu_mem2mem_vdic_setup_hardware(struct ipu_mem2mem_vdic_priv *priv)
> +{
> +	struct v4l2_pix_format *infmt, *outfmt;
> +	struct ipu_ic_csc csc;
> +	bool in422, outyuv;
> +	int ret;
> +
> +	infmt = ipu_mem2mem_vdic_get_format(priv, V4L2_BUF_TYPE_VIDEO_OUTPUT);
> +	outfmt = ipu_mem2mem_vdic_get_format(priv, V4L2_BUF_TYPE_VIDEO_CAPTURE);
> +	in422 = ipu_mem2mem_vdic_format_is_yuv422(infmt->pixelformat);
> +	outyuv = ipu_mem2mem_vdic_format_is_yuv(outfmt->pixelformat);
> +
> +	ipu_vdi_setup(priv->vdi, in422, infmt->width, infmt->height);
> +	ipu_vdi_set_field_order(priv->vdi, V4L2_STD_UNKNOWN, infmt->field);
> +	ipu_vdi_set_motion(priv->vdi, HIGH_MOTION);

This maps to VDI_C_MOT_SEL_FULL aka VDI_MOT_SEL=2, which is documented
as "full motion, only vertical filter is used". Doesn't this completely
ignore the previous/next fields and only use the output of the di_vfilt
four-tap vertical filter block to fill in missing lines from the
surrounding pixels (above and below) of the current field?

I think this should at least be configurable, and probably default to
MED_MOTION.
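
A rough sketch of what I have in mind, an s_ctrl handler mapping a
three-entry menu control (e.g. V4L2_CID_DEINTERLACING_MODE) onto enum
ipu_motion_sel; the ctrl_hdl and motion fields in priv would be new
additions, not something in this patch:

	static int ipu_mem2mem_vdic_s_ctrl(struct v4l2_ctrl *ctrl)
	{
		static const enum ipu_motion_sel motion_map[] = {
			LOW_MOTION, MED_MOTION, HIGH_MOTION,
		};
		struct ipu_mem2mem_vdic_priv *priv =
			container_of(ctrl->handler,
				     struct ipu_mem2mem_vdic_priv, ctrl_hdl);

		if (ctrl->id != V4L2_CID_DEINTERLACING_MODE)
			return -EINVAL;

		priv->motion = motion_map[ctrl->val];
		return 0;
	}

ipu_mem2mem_vdic_setup_hardware() would then pass priv->motion,
defaulting to MED_MOTION, instead of the hard-coded HIGH_MOTION.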

regards
Philipp





