Re: [PATCH RFC 102/111] staging: etnaviv: separate GPU pipes from execution state

On Tue, 2015-04-07 at 18:14 -0400, Rob Clark wrote:
> On Tue, Apr 7, 2015 at 12:59 PM, Christian Gmeiner
> <christian.gmeiner@xxxxxxxxx> wrote:
> >>> And each Core(/FE) has its own device node. Does this make any sense?
> >>>
> >> And I don't get why each core needs to have a single device node. IMHO
> >> this is purely an implementation decision whether to have one device
> >> node for all cores or one device node per core.
> >>
> >
> > It is an important decision. And I think that one device node per core
> > reflects the hardware design 100%.
> >
> 
> Although I haven't really added support for devices with multiple
> pipes, the pipe param in msm ioctls is intended to deal with hw that
> has multiple pipes.  (And I assume someday adreno will sprout an extra
> compute pipe, where we'll need this.)
> 
> In your case, it sounds a bit like you should have an ioctl to
> enumerate the pipes, and a getcap that returns a bitmask of compute
> engine(s) supported by a given pipe.  Or something roughly like that.
> 
The current interface already allows for that. Each core gets a simple
integer assigned. Userspace can then just ask for the feature bits of a
core, using an increasing integer as the index. The feature bits tell you
whether the core is capable of executing 2D, 3D or VG pipe states.

Since we construct the DRM device only once all cores are probed and
tear it down when one of them goes away, there are no holes in the index
space. So once you hit ENODEV when asking for the feature bits of a core,
you know that there are no more cores to enumerate.
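
To illustrate, here is a rough userspace sketch of that enumeration loop.
The ioctl, struct and parameter names below (DRM_IOCTL_ETNAVIV_GET_PARAM,
struct drm_etnaviv_param, ETNA_PARAM_FEATURES_0) are placeholders I made
up for this mail, not necessarily the final UAPI:

/*
 * Hypothetical enumeration loop: query the feature bits per core index
 * until the kernel reports ENODEV. All names are placeholders.
 */
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/ioctl.h>

struct drm_etnaviv_param {		/* assumed layout */
	uint32_t pipe;			/* core index, starting at 0 */
	uint32_t param;			/* which property to query */
	uint64_t value;			/* feature bits on return */
};

#define ETNA_PARAM_FEATURES_0		0x03	/* placeholder */
#define DRM_IOCTL_ETNAVIV_GET_PARAM \
	_IOWR('d', 0x40, struct drm_etnaviv_param)	/* placeholder */

void enumerate_cores(int drm_fd)
{
	uint32_t i;

	for (i = 0; ; i++) {
		struct drm_etnaviv_param req = {
			.pipe = i,
			.param = ETNA_PARAM_FEATURES_0,
		};

		if (ioctl(drm_fd, DRM_IOCTL_ETNAVIV_GET_PARAM, &req)) {
			if (errno == ENODEV)
				break;		/* no more cores */
			fprintf(stderr, "query failed: %s\n",
				strerror(errno));
			break;
		}
		printf("core %u: features 0x%llx\n", i,
		       (unsigned long long)req.value);
	}
}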

> >> For now I could only see that one device node per core makes things
> >> harder to get right, while I don't see a single benefit.
> >>
> >
> > What makes it harder to get right? The needed changes to the kernel
> > driver are not that hard. The user space is another story, but that's
> > because of the render-only thing, where we need to pass (prime)
> > buffers around and do fence syncs etc. In the end I do not see a
> > showstopper in the user space.
> 
> I assume the hw gives you a way to do fencing between pipes?  It seems
> at least convenient not to need to expose that via dmabuf+fence, since
> that is a bit heavyweight if you end up needing to do things like
> texture uploads/downloads or msaa resolve on one pipe synchronized to
> rendering happening on another..
> 
The cores are separate entities with no internal synchronization AFAIK.
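
Without a hardware mechanism, any cross-core ordering has to be done in
software. As a rough sketch only (assuming the buffer has been shared via
PRIME and relying on the dma-buf fd's poll support for its implicit
fences), userspace could wait for one core to be done with a buffer
before handing it to another roughly like this:

/*
 * Sketch: block until all fences attached to the shared dma-buf have
 * signalled (POLLOUT), i.e. the buffer is idle and safe to reuse on
 * another core. This is the heavyweight path mentioned above.
 */
#include <poll.h>
#include <stdio.h>

int wait_for_dmabuf_idle(int dmabuf_fd, int timeout_ms)
{
	struct pollfd pfd = {
		.fd = dmabuf_fd,
		.events = POLLOUT,	/* all fences signalled */
	};
	int ret = poll(&pfd, 1, timeout_ms);

	if (ret < 0)
		perror("poll(dma-buf)");
	return ret > 0 ? 0 : -1;	/* 0 when idle, -1 on timeout/error */
}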

Regards,
Lucas
-- 
Pengutronix e.K.             | Lucas Stach                 |
Industrial Linux Solutions   | http://www.pengutronix.de/  |

_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/dri-devel




