How to integrate camera, MPEG decoder, colorspace conversion hardware into Linux?

Hello,

I'm looking for pointers on how to properly integrate the following
SoC functions into the Linux media subsystem.

The SoC in question has the following components:
- camera interface
- MPEG2/4 HW decoder
- YUV->RGB conversion and scaling engine

I have prototype code which captures two interlaced fields from the
camera interface, constructs a full frame in system memory using the
chip's DMA engine, and finally uses the scaler to convert the YUV data
to RGB and scale it up to full display resolution. However, it is one
monolithic driver, and one has to unload it to use the MPEG decoder
(which also uses the scaler).
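
To illustrate what I have in mind, here is a rough userspace sketch,
assuming the capture path were exposed as a standard V4L2 node (the
/dev/video0 path, the 720x576 PAL geometry and the read() I/O are
placeholders, not what the prototype currently does):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
        struct v4l2_capability cap;
        struct v4l2_format fmt;
        void *buf;
        int fd = open("/dev/video0", O_RDWR);

        if (fd < 0) {
                perror("open");
                return 1;
        }

        /* Make sure the node really is a capture device. */
        if (ioctl(fd, VIDIOC_QUERYCAP, &cap) < 0 ||
            !(cap.capabilities & V4L2_CAP_VIDEO_CAPTURE)) {
                fprintf(stderr, "not a capture device\n");
                return 1;
        }

        /* Ask for a full interlaced YUV frame; the driver would
         * weave the two fields together with its DMA engine, just
         * as the prototype does internally today. */
        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        fmt.fmt.pix.width = 720;
        fmt.fmt.pix.height = 576;
        fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
        fmt.fmt.pix.field = V4L2_FIELD_INTERLACED;
        if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0) {
                perror("VIDIOC_S_FMT");
                return 1;
        }

        /* Grab one frame via read(); real code would use mmap()ed
         * streaming buffers instead. */
        buf = malloc(fmt.fmt.pix.sizeimage);
        if (read(fd, buf, fmt.fmt.pix.sizeimage) < 0)
                perror("read");

        free(buf);
        close(fd);
        return 0;
}

With a node like that, mplayer's existing tv:// input (e.g.
mplayer tv:// -tv driver=v4l2:device=/dev/video0) should work without
any player-side changes.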

The hardware blocks work independently of each other; the scaler is
usually used to DMA its output into one of four framebuffer windows.
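
Since the scaler reads from and writes to system memory, one way I
could imagine exposing it is as a memory-to-memory V4L2 device: the
OUTPUT queue carries the YUV source frame and the CAPTURE queue
returns the scaled RGB result. A hypothetical sketch (the /dev/video1
path, the geometries and the RGB565 output format are all made up):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
        struct v4l2_format src, dst;
        int fd = open("/dev/video1", O_RDWR);

        if (fd < 0) {
                perror("open");
                return 1;
        }

        /* OUTPUT queue: the YUV frame fed in from system memory
         * (a captured field pair or a decoded MPEG picture). */
        memset(&src, 0, sizeof(src));
        src.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
        src.fmt.pix.width = 720;
        src.fmt.pix.height = 576;
        src.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
        if (ioctl(fd, VIDIOC_S_FMT, &src) < 0)
                perror("VIDIOC_S_FMT(OUTPUT)");

        /* CAPTURE queue: the converted RGB frame, scaled up to the
         * display resolution and DMAed into a framebuffer window. */
        memset(&dst, 0, sizeof(dst));
        dst.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        dst.fmt.pix.width = 1024;
        dst.fmt.pix.height = 768;
        dst.fmt.pix.pixelformat = V4L2_PIX_FMT_RGB565;
        if (ioctl(fd, VIDIOC_S_FMT, &dst) < 0)
                perror("VIDIOC_S_FMT(CAPTURE)");

        /* Buffers would then be queued on both sides with
         * VIDIOC_QBUF; each OUTPUT/CAPTURE pair triggers one
         * conversion pass through the scaler. */
        return 0;
}

That would also cover the software-decode case: mplayer could hand
software-decoded YUV frames to the same node instead of converting
them on the CPU.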

The goal is to have mplayer capture analog video from the camera
interface and/or have it feed digital video (DVB) through the MPEG
decoder and display it. Additionally, since the scaler is independent
of the rest, mplayer should be able to at least use the colorspace
converter and decode other video formats in software.

Are there any drivers in the kernel already which implement this kind of scheme?

Thanks!
        Manuel Lauss