[RFC] Qualcomm 2D/3D graphics driver

For about a year and a half, the Qualcomm Linux team has been working to support
the OpenGL ES 3D core in the Snapdragon processor.  The hardware made its debut
in the Nexus One and has since been used in a few other commercial products.
To support the 3D GPU we wrote a kernel-based driver to manage all the usual
graphics concerns - interrupts, command streams, context switching, and so on.
You can see the latest and greatest incarnation of our driver here:

https://www.codeaurora.org/gitweb/quic/la/?p=kernel/msm.git;a=tree;f=drivers/gpu/msm;h=dcacc55d835348f71454784c0e32b819424af4be;hb=refs/heads/android-msm-2.6.32

I'm writing this email because we think it is high time that we get off the
bench, into the game and push support for the Qualcomm graphics cores to the
mainline kernel. We are looking for advice and comment from the community on
the approach we have taken and what steps we might need to take, if any, to
modify the driver so it can be accepted.  I'm going to offer a quick
description of the hardware, describe our current approach, and outline our
development plan for the summer.  Our goal is to start pushing this
code upstream as quickly as possible, so comments and flames would be greatly
appreciated.

=====

The hardware layout is reasonably straightforward.  The GPU is, as expected,
a platform device located on the same die as the processor.  The registers are
mapped into the physical address space of the processor.  The device also
shares memory with the main processor; there is no dedicated memory on board
for the GPU.  The GPU has a built-in MMU for mapping paged memory.

Some processor variants also have a separate 2D core attached.  The 2D core is
also a platform device with shared memory and an MMU.  While the general interface
is similar to the 3D core, the 2D GPU has its own separate pipeline and
interrupt.
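
To make the layout concrete, here is a rough, untested sketch of how the cores
might be described as platform devices in the board code.  The device names,
register addresses and IRQ numbers below are made up for illustration and are
not the real MSM values:

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/ioport.h>
#include <linux/platform_device.h>

/* Hypothetical resources for the 3D core: one register window mapped
 * into the CPU physical address space and one interrupt line. */
static struct resource example_gpu3d_resources[] = {
	{
		.name  = "kgsl_3d_reg",
		.start = 0xa0000000,		/* made-up register base */
		.end   = 0xa000ffff,
		.flags = IORESOURCE_MEM,
	},
	{
		.name  = "kgsl_3d_irq",
		.start = 100,			/* made-up IRQ number */
		.end   = 100,
		.flags = IORESOURCE_IRQ,
	},
};

static struct platform_device example_gpu3d_device = {
	.name          = "kgsl-3d",
	.id            = 0,
	.num_resources = ARRAY_SIZE(example_gpu3d_resources),
	.resource      = example_gpu3d_resources,
};

/* The optional 2D core would be a second, independent platform device
 * with its own register window and interrupt, registered the same way. */
static int __init example_register_gpus(void)
{
	return platform_device_register(&example_gpu3d_device);
}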

The core of the driver is a home-grown ioctl() API through a character device
we call /dev/kgsl.  As with other GPU drivers, all the heavy lifting is done by
the userspace drivers, so the core set of ioctls is mainly used to access the
hardware or to manage contexts:

IOCTL_KGSL_DEVICE_GETPROPERTY
IOCTL_KGSL_DEVICE_REGREAD
IOCTL_KGSL_DEVICE_REGWRITE
IOCTL_KGSL_RINGBUFFER_ISSUEIBCMDS
IOCTL_KGSL_DEVICE_WAITTIMESTAMP
IOCTL_KGSL_CMDSTREAM_READTIMESTAMP
IOCTL_KGSL_DRAWCTXT_CREATE
IOCTL_KGSL_DRAWCTXT_DESTROY
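
To make the flow concrete, here is a rough, untested sketch of how a userspace
driver might submit an indirect buffer and wait for it to retire.  The argument
structs shown are simplified placeholders rather than our real definitions, and
the ioctl numbers come from the driver's own header (shown as msm_kgsl.h, path
assumed):

#include <fcntl.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include "msm_kgsl.h"	/* ioctl numbers from the driver's header; exact path assumed */

/* Placeholder for the real ISSUEIBCMDS argument struct. */
struct example_issueibcmds {
	uint32_t drawctxt_id;	/* context from IOCTL_KGSL_DRAWCTXT_CREATE */
	uint32_t ibaddr;	/* GPU address of the indirect (command) buffer */
	uint32_t sizedwords;	/* length of the indirect buffer in dwords */
	uint32_t timestamp;	/* filled in by the kernel on submission */
};

/* Placeholder for the real WAITTIMESTAMP argument struct. */
struct example_waittimestamp {
	uint32_t timestamp;
	uint32_t timeout_ms;
};

static int example_submit_and_wait(int fd, uint32_t ctxt,
				   uint32_t ibaddr, uint32_t dwords)
{
	struct example_issueibcmds ib = {
		.drawctxt_id = ctxt,
		.ibaddr      = ibaddr,
		.sizedwords  = dwords,
	};
	struct example_waittimestamp wait;

	/* Hand the prepared command buffer to the kernel for execution. */
	if (ioctl(fd, IOCTL_KGSL_RINGBUFFER_ISSUEIBCMDS, &ib) < 0)
		return -1;

	/* Block until the GPU retires that submission. */
	wait.timestamp  = ib.timestamp;
	wait.timeout_ms = 1000;
	return ioctl(fd, IOCTL_KGSL_DEVICE_WAITTIMESTAMP, &wait);
}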

In the early days of the driver, any memory used for the GPU (command buffers,
color buffers, etc.) was allocated by the userspace driver through PMEM (a
simple contiguous memory allocator written for Android; see
drivers/misc/pmem.c).  PMEM is not ideal because the contiguous memory it uses
needs to be carved out of bootmem at init time and is lost to the general
system pool, so the driver was switched to use paged memory allocated via
vmalloc() and mapped into the GPU MMU.  As a result, a handful of additional
ioctls were put into the driver to support allocating and managing the memory.
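
The kernel side of that allocation path looks roughly like the following
untested sketch; struct example_mmu and example_mmu_map_page() stand in for
the driver's own MMU code and are not real kernel APIs:

#include <linux/mm.h>
#include <linux/types.h>
#include <linux/vmalloc.h>

struct example_mmu;	/* stand-in for the driver's per-GPU MMU/pagetable state */

/* Hypothetical helper: enter one physical page into the GPU pagetable. */
int example_mmu_map_page(struct example_mmu *mmu, u32 gpu_addr, struct page *page);

/* Allocate paged (non-contiguous) kernel memory and map it into the GPU
 * MMU at the requested GPU virtual address. */
static void *example_gpu_alloc(struct example_mmu *mmu, u32 gpu_addr, size_t size)
{
	void *buf = vmalloc(size);
	unsigned long offset;

	if (!buf)
		return NULL;

	/* Walk the allocation page by page; vmalloc memory is virtually
	 * contiguous for the CPU but physically scattered, so each page
	 * gets its own entry in the GPU pagetable. */
	for (offset = 0; offset < size; offset += PAGE_SIZE) {
		struct page *page = vmalloc_to_page(buf + offset);

		example_mmu_map_page(mmu, gpu_addr + offset, page);
	}

	return buf;
}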

When we started our X11 effort last year, we needed a DRM driver to support
DRI2.  We added a DRM skeleton to the existing GPU driver (this was the driving
force behind the platform DRM patches I've sent out periodically).  The DRM
driver allocates its own memory for buffers to support GEM, and the buffers are
mapped into the GPU MMU prior to rendering.  It is important to note that the
DRM driver only provides GEM and basic DRM services - the userspace graphics
libraries still run rendering through the /dev/kgsl interface.
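
In other words, a userspace client ends up holding two file descriptors: the
DRM node for GEM objects and DRI2 buffer exchange, and /dev/kgsl for actual
rendering.  A rough sketch, with the GEM-create ioctl and its argument struct
as placeholders rather than the driver's real interface:

#include <fcntl.h>
#include <stdint.h>
#include <sys/ioctl.h>

/* Placeholder argument struct and ioctl number for a driver-specific
 * GEM allocation call; not the real interface. */
struct example_gem_create {
	uint32_t size;
	uint32_t handle;	/* returned GEM handle */
};
#define EXAMPLE_IOCTL_GEM_CREATE _IOWR('d', 0x40, struct example_gem_create)

static int example_open_and_alloc(void)
{
	/* DRM node: GEM objects and DRI2 buffer exchange with the X server. */
	int drm_fd = open("/dev/dri/card0", O_RDWR);
	/* kgsl node: contexts, command submission, timestamps. */
	int kgsl_fd = open("/dev/kgsl", O_RDWR);
	struct example_gem_create create = { .size = 1024 * 1024 };

	if (drm_fd < 0 || kgsl_fd < 0)
		return -1;

	/* Allocate a render target through GEM; the kernel maps it into
	 * the GPU MMU before rendering touches it. */
	if (ioctl(drm_fd, EXAMPLE_IOCTL_GEM_CREATE, &create) < 0)
		return -1;

	/* Rendering commands that reference the buffer are still issued
	 * through kgsl_fd via IOCTL_KGSL_RINGBUFFER_ISSUEIBCMDS. */
	return 0;
}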

Then, when support came along for the 2D core, it turned out that most of the
support code was identical to that for the 3D GPU, with only a few differences
in how the command streams and interrupts were processed.  The 2D and 3D code
were merged together to form the driver that I linked above.  The ioctl calls
remained the same, and a "device" member was added to each structure to
determine which core the call was destined for.  Each device has its own MMU,
but both MMUs share the same pagetables.
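
As a simplified, made-up illustration of how that dispatch works (field and
function names are hypothetical, not the driver's actual ones):

#include <linux/errno.h>
#include <linux/kernel.h>

struct example_device;	/* per-core state: registers, ringbuffer, MMU */

/* Same placeholder struct as in the earlier sketch, now carrying the
 * "device" member that selects which core runs the commands. */
struct example_issueibcmds {
	unsigned int device_id;
	unsigned int drawctxt_id;
	unsigned int ibaddr;
	unsigned int sizedwords;
	unsigned int timestamp;
};

/* Hypothetical per-core submission routine. */
long example_device_submit(struct example_device *dev,
			   struct example_issueibcmds *param);

/* One entry per core; each core has its own registers, ringbuffer and
 * MMU, but both MMUs point at the same pagetables. */
static struct example_device *example_devices[2];

static long example_ioctl_issueibcmds(struct example_issueibcmds *param)
{
	if (param->device_id >= ARRAY_SIZE(example_devices))
		return -EINVAL;

	return example_device_submit(example_devices[param->device_id], param);
}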

It has been argued that having the 2D and the 3D cores together is silly since
they are separate platform devices and should be treated as such - the
proposal is to have multiple device nodes, one for each device.  Each device
would have its own iomapped registers, MMU pagetables, etc., while sharing
generic support code in the driver:

/dev/kgsl/0
/dev/kgsl/1
etc..

I think that if we went with this design we would also need an additional
device node for allocating memory buffers, which would make it easier for us
to share memory between cores (X11 in particular does a lot of simultaneous 2D
and 3D).  I also think that the memory allocator should be transitioned to a
standard design (probably TTM).  Of course, for X11 the DRM/GEM driver would
still be used, with GEM becoming a wrapper for the shared memory allocator.
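
A rough userspace-side sketch of how that proposal might look; every path and
node name below is hypothetical, since none of this exists yet:

#include <fcntl.h>

int example_shared_buffer(void)
{
	int gpu3d = open("/dev/kgsl/0", O_RDWR);	/* 3D core */
	int gpu2d = open("/dev/kgsl/1", O_RDWR);	/* 2D core */
	int mem   = open("/dev/kgsl/mem", O_RDWR);	/* hypothetical shared allocator node */

	if (gpu3d < 0 || gpu2d < 0 || mem < 0)
		return -1;

	/*
	 * A buffer allocated once through the allocator node would be
	 * visible to both cores at the same GPU address, since the MMUs
	 * share pagetables.  X11 could then render into it with the 3D
	 * core and composite it with the 2D core without copying.
	 */
	return 0;
}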

Thanks for reading,
Jordan

PS:  About the name: our userspace drivers use a HAL called GSL (for graphics
support layer).  Elements of that HAL are what you see today in the kernel
driver, so we called it KGSL (Kernel GSL).  We used to have the driver in
drivers/char/msm_kgsl/ but we were convinced to move it to drivers/gpu/msm,
which is already a great improvement in naming and in location.  I presume
that one of the conditions of upstreaming would be to rename everything to
something a little bit more descriptive and a little bit less cryptic.

--
Jordan Crouse
Qualcomm Innovation Center
Qualcomm Innovation Center is a member of Code Aurora Forum

