Re: [PATCH 2/2] drm/lima: driver for ARM Mali4xx GPUs

On Thu, Feb 14, 2019 at 10:12 AM Christian König via dri-devel
<dri-devel@xxxxxxxxxxxxxxxxxxxxx> wrote:
>
> On 14.02.19 at 03:52, Alex Deucher via dri-devel wrote:
> > [SNIP]
> >>>>> +static int lima_ioctl_gem_va(struct drm_device *dev, void *data, struct drm_file *file)
> >>>>> +{
> >>>>> +       struct drm_lima_gem_va *args = data;
> >>>>> +
> >>>>> +       switch (args->op) {
> >>>>> +       case LIMA_VA_OP_MAP:
> >>>>> +               return lima_gem_va_map(file, args->handle, args->flags, args->va);
> >>>>> +       case LIMA_VA_OP_UNMAP:
> >>>>> +               return lima_gem_va_unmap(file, args->handle, args->va);
> >>>> These map to GPU VAs. Why not do that at GEM object creation or
> >>>> import, or when the objects are submitted with the cmd queue, as
> >>>> other drivers do?
> >>>>
> >>>> To put it another way, these ioctls look different from what other
> >>>> drivers do. Why do you need to do things differently? My understanding
> >>>> is that best practice is to map and return the GPU offset when the GEM
> >>>> object is created. This is what v3d does; I think Intel is moving to
> >>>> that, and panfrost will do the same.
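
For comparison, the create-time model looks roughly like this from
userspace (a minimal sketch based on v3d_drm.h and libdrm's drmIoctl;
error handling omitted, "fd" is an open render node):

        struct drm_v3d_create_bo create = {
                .size = 64 * 1024,
        };

        /* The kernel allocates backing storage *and* picks the GPU VA. */
        drmIoctl(fd, DRM_IOCTL_V3D_CREATE_BO, &create);

        /* create.handle is the GEM handle; create.offset is the GPU VA
         * the kernel assigned -- no separate VA ioctl needed. */

Userspace never chooses an address here; it just reads back whatever
the kernel picked.
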
> >>> I think it would be a good idea to look at the amdgpu driver.  This
> >>> driver is heavily modeled after it.  Basically the GEM VA ioctl allows
> >>> userspace to manage per-process (per-fd, really) virtual addresses.
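
Concretely, with the uAPI proposed in this patch, userspace would drive
it roughly like this (a minimal sketch; the struct fields follow the
quoted code, while the DRM_IOCTL_LIMA_GEM_VA wrapper name and the
bo_handle/va variables are placeholders -- va would come from a
userspace address allocator):

        struct drm_lima_gem_va args = {
                .op     = LIMA_VA_OP_MAP,
                .handle = bo_handle,    /* GEM handle of the BO to map */
                .flags  = 0,            /* access flags, per the patch */
                .va     = va,           /* GPU address chosen by userspace */
        };

        /* Userspace, not the kernel, picked "va". */
        drmIoctl(fd, DRM_IOCTL_LIMA_GEM_VA, &args);

amdgpu's GEM_VA ioctl follows the same pattern, just with a richer set
of operations and flags.
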
> >> Why do you want userspace to manage VA assignment rather than having
> >> the kernel do it? Exposing that detail to userspace means the driver
> >> must support a per-process address space. Letting the kernel assign
> >> addresses means it can be either a single address space or a
> >> per-process address space. It seems more flexible to me to let the
> >> kernel driver evolve without committing to that ABI.
> > Having it in userspace provides a lot more flexibility and makes it
> > easier to support things like a unified address space between CPU and
> > GPU. I guess the right choice depends on the hardware.
>
> To summarize: we actually tried this approach with radeon, and it
> turned out to be a really bad mistake.
>
> To implement features like partially resident textures and a shared
> virtual address space, you absolutely need userspace to be in charge of
> allocating virtual addresses.

Yeah, same here: as soon as you have per-process address spaces, you
want your userspace to control where buffers are placed. All new Intel
drivers (anv and iris) use softpin to fully control the layout. Of
course, if your hw lacks per-process virtual address spaces on the GPU
in some form, then the kernel needs to assign addresses, which means
lots of relocs. i965_dri.so still works like that, even with the
rewritten buffer/batch manager, but I'd really only do that if you
can't avoid it.
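
For reference, softpin in the i915 uAPI boils down to roughly this (a
minimal sketch; bo_handle, gpu_va and batch_size are placeholders, and
context setup plus error handling are omitted):

        struct drm_i915_gem_exec_object2 obj = {
                .handle = bo_handle,
                .offset = gpu_va,               /* chosen by userspace */
                .flags  = EXEC_OBJECT_PINNED,   /* softpin: honor .offset */
        };

        struct drm_i915_gem_execbuffer2 execbuf = {
                .buffers_ptr  = (uintptr_t)&obj,
                .buffer_count = 1,
                .batch_len    = batch_size,
        };

        /* No relocation list: the addresses baked into the batch are
         * already correct because userspace owns the layout. */
        drmIoctl(fd, DRM_IOCTL_I915_GEM_EXECBUFFER2, &execbuf);
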
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch