RE: DMA Engine API multiplanar support

Hi Vinod,

> -----Original Message-----
> From: Vinod Koul [mailto:vinod.koul@xxxxxxxxx]
> Sent: Monday, August 22, 2016 11:43 AM
> To: Radhey Shyam Pandey <radheys@xxxxxxxxxx>
> Cc: dmaengine@xxxxxxxxxxxxxxx; Hyun Kwon <hyunk@xxxxxxxxxx>; Appana
> Durga Kedareswara Rao <appanad@xxxxxxxxxx>; Laurent Pinchart
> (laurent.pinchart@xxxxxxxxxxxxxxxx) <laurent.pinchart@xxxxxxxxxxxxxxxx>
> Subject: Re: DMA Engine API multiplanar support
> 
> On Wed, Aug 03, 2016 at 10:05:38AM +0000, Radhey Shyam Pandey wrote:
> > Hi Vinod,
> >
> > >
> > > > I am planning to write DMA driver for new Xilinx VDMA IP (HLS
> > > > Based) and integrate it with Xilinx V4L2 capture pipeline.
> > >
> > > It should be an update to the existing VDMA driver then?
> > Yes, initially we thought the same, but in later design discussions we
> > decided to add a new DMA driver, since there was no code
> > reusability/common code between the two DMA IP variants.
> 
> I would not be so sure about that!
> 
> > > > For a single DMA read/write transaction the hardware requires two
> > > > physically separate buffers for the Y and CbCr components, which must
> > > > be placed in two different memory banks, i.e. the DMA reads a YUV
> > > > AXI-Stream from the source (DEV) and writes it to MEM in planar
> > > > format.
> > > >
> > > >
> > > > (See the V4L2_PIX_FMT_NV16 format description.)
> > > >
> > > > So do I need some attribute in struct dma_chan to indicate that it
> > > > supports packed/planar formats?
> > > >
> > > > Also, while programming the DMA we need to provide the base addresses
> > > > of the Y plane and the UV plane to the hardware for one DMA transaction.
> > > > The current dmaengine_prep_* implementations don't provide an interface
> > > > to program multiplanar channel addresses.
> > >
> > > Sounds like interleaved API could be used here?
> > The interleaved DMA transfer template has src_start and dst_start to
> > provide the bus address of the source/destination of the first chunk.
> 
> Yes
> 
> > For multiplanar support the requirement is to program multiple
> > source/destination buffer addresses for a single DMA transaction.
> 
> Why can't a txn from the user be split into multiple dmaengine txns?

Assuming we split the frame transfer into two dma_async_tx_descriptors
(two callbacks), we would still need some mechanism to pass the
multiplanar information from the client driver to the dmaengine driver,
i.e. the DMA programming sequence is different for a single vs.
multiple destination buffer addresses.
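
For reference, here is a rough, untested sketch of what such a split
could look like on the client side. The function name and parameters are
placeholders, not existing Xilinx code:

#include <linux/dmaengine.h>
#include <linux/slab.h>

/* Placeholder helper: queue one NV16M frame as two interleaved txns. */
static int queue_nv16m_frame(struct dma_chan *chan,
			     dma_addr_t y_addr, dma_addr_t uv_addr,
			     unsigned int width, unsigned int height,
			     dma_async_tx_callback done, void *arg)
{
	dma_addr_t plane_addr[2] = { y_addr, uv_addr };
	struct dma_async_tx_descriptor *desc;
	struct dma_interleaved_template *xt;
	int i;

	xt = kzalloc(sizeof(*xt) + sizeof(struct data_chunk), GFP_KERNEL);
	if (!xt)
		return -ENOMEM;

	for (i = 0; i < 2; i++) {
		xt->dir = DMA_DEV_TO_MEM;
		xt->dst_start = plane_addr[i];
		xt->dst_inc = true;
		xt->src_inc = false;
		xt->numf = height;		/* one chunk per line */
		xt->frame_size = 1;
		/* NV16: the Y and CbCr lines are both 'width' bytes. */
		xt->sgl[0].size = width;
		xt->sgl[0].icg = 0;		/* assume no line padding */

		desc = dmaengine_prep_interleaved_dma(chan, xt,
						      DMA_PREP_INTERRUPT);
		if (!desc) {
			kfree(xt);
			return -ENOMEM;
		}

		/* Only the second (CbCr) txn signals frame completion. */
		desc->callback = (i == 1) ? done : NULL;
		desc->callback_param = (i == 1) ? arg : NULL;
		dmaengine_submit(desc);
	}

	dma_async_issue_pending(chan);
	kfree(xt);
	return 0;
}

Even with something like this, the dmaengine driver only sees two
unrelated descriptors, so it cannot tell that the two dst_start values
belong to the same hardware frame, which is exactly the information our
IP needs for one transaction.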

> 
> > One idea is to extend the transfer template to provide an array of
> > source/destination addresses. Is there an alternative solution to support
> > this use case?
> 
> >
> > Example:
> > Test Pattern Generator / VIVID -> (YUV 4:2:2) ->
> > Y will be written to one plane and UV to the other plane, i.e. the start0
> > and start1 addresses.
> >
> > V4L2_PIX_FMT_NV16M 4 × 4 pixel image
> >
> > Byte Order. Each cell is one byte.
> >
> > start0 + 0:	Y'00	Y'01	Y'02	Y'03
> > start0 + 4:	Y'10	Y'11	Y'12	Y'13
> > start0 + 8:	Y'20	Y'21	Y'22	Y'23
> > start0 + 12:	Y'30	Y'31	Y'32	Y'33
> >
> > start1 + 0:	Cb00	Cr00	Cb02	Cr02
> > start1 + 4:	Cb10	Cr10	Cb12	Cr12
> > start1 + 8:	Cb20	Cr20	Cb22	Cr22
> > start1 + 12:	Cb30	Cr30	Cb32	Cr32
> 
> Sounds like you should actually do multiple txns with dmaengine...
> 
> --
> ~Vinod
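
Coming back to the array-of-addresses idea quoted above, a purely
hypothetical sketch (not an existing dmaengine structure, just to make
the proposal concrete) could look like:

#include <linux/dmaengine.h>

#define DMA_MAX_PLANES	3	/* arbitrary; 2 is enough for NV16M */

/* Hypothetical multiplanar variant of dma_interleaved_template. */
struct dma_multiplanar_template {
	enum dma_transfer_direction dir;
	unsigned int num_planes;		/* e.g. 2 for NV16M (Y + CbCr) */
	dma_addr_t src_start[DMA_MAX_PLANES];	/* per-plane source addresses */
	dma_addr_t dst_start[DMA_MAX_PLANES];	/* per-plane destination addresses */
	bool src_inc;
	bool dst_inc;
	size_t numf;				/* lines per frame */
	size_t frame_size;			/* chunks per line */
	struct data_chunk sgl[];
};

The client would fill dst_start[0]/dst_start[1] with the Y and CbCr
buffer addresses from the NV16M vb2 buffer and submit a single
descriptor, so the dmaengine driver could program both plane addresses
into one hardware transaction and issue a single completion callback.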