Re: RFC: hardware accelerated bitblt using dma engine

On 05.08.2016 01:16, Enrico Weigelt, metux IT consult wrote:

<snip>
It seems I've been on completely the wrong path - what I'm looking
for is dma-buf. So my idea now goes like this:

* add a new 'virtual GPU' as a render node.
* the basic operations are:
  -> create a virtual dumb framebuffer (just in system memory),
  -> import dma-bufs as BOs,
  -> blit between BOs using the dma-engine.

That way, everything should be cleanly separated.

As the application needs to be aware of that buffer-and-blit approach
anyway (IOW: allocate two BOs and trigger the blit when it's done
rendering), the extra glue needed for opening and talking to the
render node should be quite minimal.


--mtx

_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/dri-devel



