On 04.01.2017 16:47, Rob Clark wrote:
On Wed, Jan 4, 2017 at 9:54 AM, Daniel Vetter <daniel@xxxxxxxx> wrote:
On Wed, Jan 04, 2017 at 08:06:24AM -0500, Rob Clark wrote:
On Wed, Jan 4, 2017 at 7:03 AM, Daniel Stone <daniel@xxxxxxxxxxxxx> wrote:
Speaking of compression for display, especially the separate
compression buffer: That should be fully contained in the main DMABUF
and described by the per-BO metadata. Some other drivers want to use a
separate DMABUF for the compression buffer - while that may sound good
in theory, it's not economical for the reason described above.
'Some other drivers want to use a separate DMABUF', or 'some other
hardware demands the data be separate'. Same with luma/chroma plane
separation. Anyway, it doesn't really matter unless you're sharing
render-compression formats across vendors, and AFBC is the only case
of that I know of currently.
jfwiw, UBWC on newer Snapdragons too.. it seems we can share these
not just between the gpu (render to and sample from) and display, but
also the v4l2 decoder/encoder (and maybe camera?)
I *think* we probably can treat the metadata buffers as a separate
plane.. at least we can for render target and blit src/dst, but not
100% sure about sampling from a UBWC buffer.. that might force us to
have them in a single buffer.
Conceptually treating them as two planes, and requiring everywhere that
they're allocated from the same BO, are orthogonal things. At least that
was our plan for Intel render compression, last time I checked the
current state ;-)
If the positions of the different parts of the buffer are required to
be a function of w/h/bpp/etc, then I'm not sure there is a strong
advantage to treating them as separate BOs.. although I suppose it
doesn't preclude that either. As far as plumbing it through mesa/st
goes, it seems convenient to have a single buffer. (We have kind of
a hack to deal with multi-planar YUV, but I'd rather not propagate
that.. I haven't thought through those details much yet.)
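To make the "position as a function of w/h/bpp" idea concrete, here is a hedged sketch of a single-BO NV12 layout where the chroma offset is computed purely from width/height. The 64-byte pitch alignment and 4096-byte plane alignment are invented for illustration; real hardware constraints differ per vendor:

```c
#include <stdint.h>

#define PLANE_ALIGN 4096u  /* assumed plane-start alignment */

static uint32_t align_up(uint32_t v, uint32_t a)
{
    /* a must be a power of two */
    return (v + a - 1) & ~(a - 1);
}

struct nv12_layout {
    uint32_t luma_offset;
    uint32_t chroma_offset;
    uint32_t total_size;
};

/* Derive the whole single-BO layout from just width and height:
 * luma plane first, interleaved CbCr plane (half vertical
 * resolution) after it, each plane aligned. */
static struct nv12_layout nv12_layout_for(uint32_t width, uint32_t height)
{
    struct nv12_layout l;
    uint32_t pitch = align_up(width, 64); /* assumed pitch alignment */

    l.luma_offset = 0;
    l.chroma_offset = align_up(pitch * height, PLANE_ALIGN);
    l.total_size = l.chroma_offset + pitch * (height / 2);
    return l;
}
```

If both producer and consumer compute the same deterministic layout, a single BO plus the implicit formula is enough; separate BOs mainly buy flexibility when the layout is *not* derivable this way.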
Well, I don't want to ruin your day, but different hardware has
different requirements.
For example the UVD engine found in all AMD graphics cards since r600
must have both planes in a single BO because the memory controller can
only handle a rather small offset between the planes.
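A constraint like that naturally becomes an allocator/import-time validity check. A hypothetical sketch; the 256 MiB figure is invented for illustration and is not the real UVD limit:

```c
#include <stdbool.h>
#include <stdint.h>

/* Invented limit standing in for a UVD-like memory controller that
 * can only address a small offset between the two planes. */
#define MAX_INTERPLANE_OFFSET ((uint64_t)256 << 20)

/* Can this engine reach both planes, given their offsets within
 * (or across) the backing storage? */
static bool planes_reachable(uint64_t luma_offset, uint64_t chroma_offset)
{
    uint64_t delta = chroma_offset > luma_offset ?
                     chroma_offset - luma_offset :
                     luma_offset - chroma_offset;
    return delta <= MAX_INTERPLANE_OFFSET;
}
```

Two planes in one modest-sized BO trivially pass such a check; two independently allocated BOs may land arbitrarily far apart and fail it.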
On the other hand I know of embedded MPEG2/H264 decoders where the
different planes must be on different memory channels. In this case I
can imagine that you want one BO for each plane, because otherwise the
device must stitch together one buffer object from two different memory
regions (of course possible, but rather ugly).
So if we want to cover everything, we essentially need to support all
variants with DMA-buf: one plane per BO as well as all planes in one
BO. A bit tricky, isn't it?
Regards,
Christian.
BR,
-R
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/dri-devel