Hi,
On 05-10-18 13:55, Mauro Carvalho Chehab wrote:
On Mon, 1 Oct 2018 18:19:21 +0100, Dave Stevenson <dave.stevenson@xxxxxxxxxxxxxxx> wrote:
Hi All,
On Mon, 1 Oct 2018 at 17:32, Ezequiel Garcia <ezequiel@xxxxxxxxxxxxx> wrote:
Hi Hans,
Thanks for looking into this. I remember MJPEG vs. JPEG being a source
of confusion for me a few years ago, so clarification is greatly
welcome :-)
On Mon, 2018-10-01 at 15:03 +0300, Laurent Pinchart wrote:
Hi Hans,
On Monday, 1 October 2018 14:54:29 EEST Hans Verkuil wrote:
On 10/01/2018 01:48 PM, Laurent Pinchart wrote:
On Monday, 1 October 2018 11:43:04 EEST Hans Verkuil wrote:
It turns out that we have both JPEG and Motion-JPEG pixel formats
defined.
Furthermore, some drivers support one, some the other, and some both.
These pixel formats both mean the same thing.
Do they? I thought MJPEG was JPEG using fixed Huffman tables that were
not included in the JPEG headers.
I'm not aware of any difference. If there is one, then it is certainly not
documented.
What I can tell for sure is that many UVC devices don't include Huffman tables
in their JPEG headers.
Ezequiel, since you've been working with this recently, do you know anything
about this?
JPEG frames must include Huffman and quantization tables, as per the standard.
AFAIK, there's no MJPEG specification per se, and vendors each specify their own
way of conveying a Motion JPEG stream.
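Purely as an illustration of what that looks like on the wire (the helper
name is made up, and it ignores fill bytes and stand-alone markers), a
minimal sketch that scans a frame's header for a DHT segment could be:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Hypothetical helper (not from any existing tree): walk the marker
 * segments of a JPEG frame and report whether a DHT (Define Huffman
 * Table, 0xFFC4) segment shows up before SOS (Start Of Scan, 0xFFDA).
 * Fill bytes and stand-alone markers are ignored for simplicity.
 */
static bool jpeg_has_huffman_tables(const uint8_t *buf, size_t len)
{
    size_t i = 2;  /* skip the SOI marker (0xFFD8) */

    while (i + 3 < len && buf[i] == 0xff) {
        uint8_t marker = buf[i + 1];
        size_t seglen = ((size_t)buf[i + 2] << 8) | buf[i + 3];

        if (marker == 0xc4)  /* DHT */
            return true;
        if (marker == 0xda)  /* SOS, entropy-coded data follows */
            return false;
        i += 2 + seglen;     /* 2 marker bytes + segment length */
    }
    return false;
}

Frames from the cameras Laurent mentions would make this return false.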
There is the specification for MJPEG in QuickTime containers, which
defines the MJPEG-A and MJPEG-B variants [1].
MJPEG-B is not a concatenation of JPEG frames, as the framing is
different, so it can't really be combined into V4L2_PIX_FMT_JPEG.
Have people encountered devices that produce MJPEG-A or MJPEG-B via
V4L2? I haven't, but I have been forced to support both variants on
decode.
Checking it is not an easy task. I *suspect* that those cameras are all
MJPEG-A, as the libv4l decoder uses the tinyjpeg library to handle both
JPEG and MJPEG.
Maybe Hans de Goede knows more about that, and may have actually tested
it with different camera models.
I've tested the JPG path in libv4l with quite a lot of cameras, and
so far it has worked for all of them. There are some non-UVC cameras where
the hardware produces raw JPG data, but in that case the kernel driver
prefixes a JPG header to each frame so that it looks like a regular JPG.
Regards,
Hans
On that thought, whilst capture devices generally don't care, is there
a need to differentiate for M2M codec devices which can encode the
variants? Or likewise, for M2M decoders that support only
JPEG, how do they tell userspace that they don't support MJPEG-A or
MJPEG-B?
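For reference, the only discovery mechanism I'm aware of today is
VIDIOC_ENUM_FMT on the bitstream queue, which only exposes a fourcc, so
the variants can't be told apart there. A rough sketch, assuming a
multi-planar decoder node already opened as fd:

#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/*
 * Sketch only: enumerate the coded formats an M2M decoder accepts on
 * its OUTPUT (bitstream) queue. All userspace gets back is a fourcc,
 * so JPEG vs. MJPEG-A/B variants cannot be distinguished here.
 */
static void enum_coded_formats(int fd)
{
    struct v4l2_fmtdesc fmt;

    memset(&fmt, 0, sizeof(fmt));
    fmt.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;

    for (fmt.index = 0; ioctl(fd, VIDIOC_ENUM_FMT, &fmt) == 0; fmt.index++) {
        if (fmt.pixelformat == V4L2_PIX_FMT_JPEG)
            printf("accepts V4L2_PIX_FMT_JPEG bitstreams\n");
        else if (fmt.pixelformat == V4L2_PIX_FMT_MJPEG)
            printf("accepts V4L2_PIX_FMT_MJPEG bitstreams\n");
    }
}

Anything finer-grained than that would need new documentation of what
each fourcc is supposed to mean.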
Dave
[1] https://developer.apple.com/standards/qtff-2001.pdf
For instance, omitting the Huffman tables seems to be a vendor thing. Microsoft
explicitly omits the Huffman tables from each frame:
https://www.fileformat.info/format/bmp/spec/b7c72ebab8064da48ae5ed0c053c67a4/view.htm
Others could be doing the same thing.
Like I mentioned before, GStreamer always checks for a missing Huffman table
and adds one if it is missing. GStreamer has other quirks for missing markers,
e.g. dealing with a missing EOI:
https://github.com/GStreamer/gst-plugins-good/commit/10ff3c8e14e8fba9e0a5d696dce0bea27de644d7
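Just to illustrate the kind of fix-up that commit is about (this is a
sketch, not GStreamer's actual code), appending an EOI marker when a
frame arrives without one could look like:

#include <stddef.h>
#include <stdint.h>

/*
 * Illustrative only: if a frame does not end with the EOI marker
 * (0xFFD9), append one so a strict JPEG decoder does not reject it.
 * Returns the (possibly grown) frame length; cap is the capacity of buf.
 */
static size_t jpeg_append_missing_eoi(uint8_t *buf, size_t len, size_t cap)
{
    if (len >= 2 && buf[len - 2] == 0xff && buf[len - 1] == 0xd9)
        return len;          /* EOI already present */
    if (len + 2 > cap)
        return len;          /* no room, leave the frame as-is */
    buf[len] = 0xff;
    buf[len + 1] = 0xd9;
    return len + 2;
}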
I think Hans's suggestion of settling on JPEG makes sense, and it would
be consistent with GStreamer. Otherwise, we should specify exactly what we
mean by MJPEG, but I don't think it's worth it.
Thanks,
Ezequiel
Thanks,
Mauro