Re: [PATCH 2/2] drm/lima: driver for ARM Mali4xx GPUs

On Tue, Feb 12, 2019 at 10:24 AM Alex Deucher <alexdeucher@xxxxxxxxx> wrote:
>
> On Tue, Feb 12, 2019 at 10:53 AM Rob Herring via dri-devel
> <dri-devel@xxxxxxxxxxxxxxxxxxxxx> wrote:
> >
> > On Wed, Feb 6, 2019 at 7:16 AM Qiang Yu <yuq825@xxxxxxxxx> wrote:
> > >
> > > From: Lima Project Developers <lima@xxxxxxxxxxxxxxxxxxxxx>

[...]

> > > +static int lima_ioctl_gem_va(struct drm_device *dev, void *data, struct drm_file *file)
> > > +{
> > > +       struct drm_lima_gem_va *args = data;
> > > +
> > > +       switch (args->op) {
> > > +       case LIMA_VA_OP_MAP:
> > > +               return lima_gem_va_map(file, args->handle, args->flags, args->va);
> > > +       case LIMA_VA_OP_UNMAP:
> > > +               return lima_gem_va_unmap(file, args->handle, args->va);
> >
> > These are mapping to GPU VA. Why not do that on GEM object creation or
> > import or when the objects are submitted with cmd queue as other
> > drivers do?
> >
> > To put it another way, these ioctls look different from what other
> > drivers do. Why do you need to do things differently? My understanding
> > is that best practice is to map the object and return the GPU offset
> > when the GEM object is created. This is what v3d does, I think Intel
> > is moving to that, and panfrost will do the same.
>
> I think it would be a good idea to look at the amdgpu driver.  This
> driver is heavily modeled after it.  Basically the GEM VA ioctl allows
> userspace to manage per-process (per-fd, really) virtual addresses.

Why do you want userspace to manage assigning VAs rather than having
the kernel do it? Exposing that detail to userspace means the driver
must support a per-process address space. Letting the kernel assign
addresses means it can use either a single address space or a
per-process one. It seems more flexible to me to let the kernel driver
evolve without baking that detail into the ABI.
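
To make the contrast concrete, the kernel-assigned model in v3d looks
roughly like this (paraphrased from memory of
include/uapi/drm/v3d_drm.h, so treat it as a sketch rather than the
literal header):

struct drm_v3d_create_bo {
	__u32 size;
	__u32 flags;
	/* Returned GEM handle for the BO. */
	__u32 handle;
	/* Returned offset of the BO in the GPU address space. The
	 * kernel picks it; whether that space is per-device or
	 * per-process is invisible to userspace. */
	__u32 offset;
};

Because the address only ever flows from the kernel to userspace, the
kernel is free to start with a single global address space and later
switch to per-process spaces without breaking the ABI.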

With any new driver in the kernel, the question is always which
existing one is the best model to follow. I don't think Intel, AMD, or
Nouveau are good examples to follow because they have a lot of history
and legacy, drive both the GPU and the display controller, and have
separate graphics memory (except Intel, I guess). The GPUs in ARM land
really have none of those traits. Looking through freedreno, etnaviv,
and v3d mostly, I see they all have similar user ABIs, yet they are
all different depending on which driver they copied and how they've
evolved. I know it's a big can of worms, but it would be nice to have
some alignment of ABIs. I know the reasons why there isn't any, but
it's frustrating that only 11 out of 60K IGT tests will run. I don't
think a common ABI matters much for the big 3, but in the ARM zoo I
think it does. At the least, if the interfaces are kept similar, then
sharing common code among the embedded GPUs would be easier, and so
would writing an IGT shim for each driver.
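
Just to illustrate the shim idea (this is entirely hypothetical, not
an existing IGT interface), a thin per-driver ops table is about all
it would take if the ABIs stayed close:

/* Hypothetical per-driver shim for IGT; none of these hooks exist
 * today. It only sketches how little would need abstracting. */
struct igt_embedded_gpu_ops {
	const char *name;
	/* Create a BO; returns the GEM handle and, for drivers that
	 * assign a GPU VA at creation time, the address too. */
	int (*bo_create)(int fd, __u64 size, __u32 *handle, __u64 *va);
	/* Map the BO for CPU access (the mmap-offset dance differs
	 * per driver). */
	int (*bo_mmap)(int fd, __u32 handle, __u64 size, void **map);
	/* Submit a command stream referencing a set of BO handles. */
	int (*submit)(int fd, const __u32 *handles, __u32 count,
		      const void *stream, __u32 stream_size);
};

With the ioctls this close already, each driver's entry would mostly
be argument repacking.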


Rob