RE: abuse of dumb ioctls in exynos

Hi Dave!

I guess I should have opened a discussion around armsoc a lot earlier
than now as you clearly have some frustrations! Sorry about that.

It also sounds like you have some ideas about how we should approach
the technical side, and those I really want to understand.


> -----Original Message-----
> From: Dave Airlie [mailto:airlied@xxxxxxxxx]
> Sent: 23 April 2013 21:29
> To: Tom Cooksey
> Cc: dri-devel; Inki Dae
> Subject: Re: abuse of dumb ioctls in exynos
> 
> >
> > Having a flag to indicate a dumb buffer allocation is to be used as a
> > scan-out buffer would be useful for xf86-video-armsoc. We're trying to
> > keep that driver as generic as possible and currently the main device-
> > specific bits are what flags to pass to DRM_IOCTL_MODE_CREATE_DUMB for
> > scanout & non-scanout buffer allocations. If a generic scanout flag could
> > be added, it would simplify armsoc a fair bit and also allow the DRM
> > drivers we're using armsoc with to comply with the "don't pass device-
> > specific flags to create dumb" rule.
> >
> > For reference, the device-specific bits of armsoc are currently abstracted
> > here:
> >
> > Note: We are still using DRM_IOCTL_MODE_CREATE_DUMB to allocate pixmap
> > and DRI2 buffers and have not come across any issues with doing that.
> > Certainly both Mali-400 & Mali-T6xx render to linear RGBA buffers and
> > the display controllers in SoCs shipping Mali also seem to happily
> > scan-out linear RGB buffers. Getting armsoc to run on OMAP (again) might
> > need a device-specific allocation function to allocate the tiled format
> > used on OMAP, but only for efficient 90-degree rotations (if I understood
> > Rob correctly). So maybe we could also one day add a "this buffer will be
> > rotated 90 degrees" flag?
> 
> What part of don't use dumb buffer for acceleration is hard to understand?
> 
> Christ, I called them DUMB. Let's try this again.
> 
> DON'T USE DUMB BUFFERS FOR ALLOCATING BUFFERS USED FOR ACCELERATION.

Right, I _think_ I understand your opinion on that. :-)

The reason we (currently) use the dumb buffer interface is because it
does pretty much exactly what we need it to, as we only want linear
RGB buffers:

On Mali & probably other tile-based GPUs, the back buffer only gets
written once per frame, when the GPU writes its on-die tile buffer out
to system memory. As such, we don't need the complicated memory layouts
immediate-mode renderers use to improve cache efficiency, etc.

What's more, the 2D hardware typically found on SoCs we're targeting
isn't advanced enough to implement all of the EXA operations and
frequently falls back to software rendering, which only works with
linear RGB buffers.

Another option we nearly went with is to use ION to allocate all
buffers, using the PRIME ioctls to import those buffers we want to
scan out into the display controller's DRM driver. ION's a pretty
good fit, but it requires some SoC-specific logic in userspace, e.g.
knowing that the display controller doesn't have an IOMMU and that we
must therefore allocate from a contiguous ION heap. By allocating
via the DUMB interface and specifying a scanout hint, we can leave
that decision to the DRM driver and keep userspace entirely generic.
The other reason to go with DUMB rather than ION was that ION
wasn't upstream.
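
To make that concrete, the allocation from userspace would look roughly
like the sketch below. DRM_SCANOUT_HINT is made up - it stands in for
the generic scanout flag being proposed here; today armsoc has to pass
device-specific flags in its place:

#include <stdint.h>
#include <string.h>
#include <xf86drm.h>   /* pulls in drm.h / drm_mode.h */

#define DRM_SCANOUT_HINT 0x1   /* hypothetical generic flag */

static int alloc_dumb_buffer(int drm_fd, uint32_t width, uint32_t height,
                             int scanout, uint32_t *handle, uint32_t *pitch)
{
        struct drm_mode_create_dumb create;
        int ret;

        memset(&create, 0, sizeof(create));
        create.width  = width;
        create.height = height;
        create.bpp    = 32;                          /* linear XRGB8888 */
        create.flags  = scanout ? DRM_SCANOUT_HINT : 0;

        ret = drmIoctl(drm_fd, DRM_IOCTL_MODE_CREATE_DUMB, &create);
        if (ret)
                return ret;

        *handle = create.handle;  /* GEM handle, e.g. for drmModeAddFB */
        *pitch  = create.pitch;   /* driver-chosen stride for the linear layout */
        return 0;
}

The point is that the only device-specific knowledge left in userspace
is whether to set the flag; the kernel driver decides what
"scanout-capable" actually means (contiguous or not, etc.).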


> Now that we've cleared that up, armsoc is a big bag of shit, I've
> spent a few hours on it in the last few weeks trying to get anything
> to run on my chromebook and really armsoc needs to be put out of its
> misery.

This is why we need a bug tracker! To objectively quantify "big bag
of shit" and fix it. :-)


> The only working long term strategy for ARM I see is to abstract the
> common modesetting code into a new library, 

Would you mind elaborating a little on this? I assume you're not talking
about libkms? What operations would need to be abstracted in userspace
which aren't already nicely abstracted by KMS? Once we have a new
library of some description, I assume you're suggesting we modify armsoc
to use it? That seems a good idea, as it also means we could use it to
implement the HWComposer HAL on Android, and thus the same driver code
could be used with minimal changes on X11, Android, Wayland, Mir and
whatever other new window system comes along. That's really the point
I'm trying to get to.


> and write a per-GPU
> driver.

So in our bit of the ARM ecosystem, the GPU is just the bit which
draws 3D graphics. The 2D drawing hardware is separate, as is the
display controller, as is the video codec. This is reflected in the
driver model: the GPU driver is totally bespoke, the display
controller interface is DRM/KMS and the video codec is v4l2. There
doesn't appear to be a standard kernel interface for 2D draw
operations, so those seem to get added to DRM drivers. We now have
dma_buf, which lets us share buffers between those different drivers.
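
Roughly, that sharing looks like the sketch below (assuming both
devices are already open and the exporting driver supports PRIME):

#include <stdint.h>
#include <unistd.h>
#include <xf86drm.h>

static int share_buffer(int exporter_fd, uint32_t exporter_handle,
                        int importer_fd, uint32_t *importer_handle)
{
        int dmabuf_fd = -1;
        int ret;

        /* Wrap the GEM object in a dma_buf file descriptor... */
        ret = drmPrimeHandleToFD(exporter_fd, exporter_handle,
                                 DRM_CLOEXEC, &dmabuf_fd);
        if (ret)
                return ret;

        /* ...and turn it back into a GEM handle on the importing device. */
        ret = drmPrimeFDToHandle(importer_fd, dmabuf_fd, importer_handle);
        close(dmabuf_fd);   /* the importer holds its own reference now */
        return ret;
}

So, for example, the codec can decode into a buffer the display
controller then scans out, without either driver knowing about the
other.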

So by per-GPU driver, I assume you mean per-display-controller
driver, for which KMS is already a great abstraction.


> What you are doing now is madness and needs to stop.

Or at least change direction - once we've figured out a new one.


Cheers,

Tom







