[PATCH 3/6] drm/amdgpu: reserve/unreserve vmid by vm ioctl v3

On 04/27/2017 10:13 AM, zhoucm1 wrote:
>
>
> On 2017-04-26 20:26, Christian König wrote:
>> Am 26.04.2017 um 13:10 schrieb Chunming Zhou:
>>> add reserve/unreserve vmid functions.
>>> v3:
>>> only reserve vmid from gfxhub
>>>
>>> Change-Id: I5f80dc39dc9d44660a96a2b710b0dbb4d3b9039d
>>> Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
>>> ---
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 70 +++++++++++++++++++++++++++-------
>>>   1 file changed, 57 insertions(+), 13 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> index ec87d64..a783e8e 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> @@ -397,6 +397,11 @@ static bool amdgpu_vm_had_gpu_reset(struct amdgpu_device *adev,
>>>           atomic_read(&adev->gpu_reset_counter);
>>>   }
>>>   +static bool amdgpu_vm_dedicated_vmid_ready(struct amdgpu_vm *vm, unsigned vmhub)
>>> +{
>>> +    return !!vm->dedicated_vmid[vmhub];
>>> +}
>>> +
>>>   /**
>>>    * amdgpu_vm_grab_id - allocate the next free VMID
>>>    *
>>> @@ -546,6 +551,47 @@ int amdgpu_vm_grab_id(struct amdgpu_vm *vm, struct amdgpu_ring *ring,
>>>       return r;
>>>   }
>>>   +static void amdgpu_vm_free_dedicated_vmid(struct amdgpu_device *adev,
>>> +                      struct amdgpu_vm *vm,
>>> +                      unsigned vmhub)
>>> +{
>>> +    struct amdgpu_vm_id_manager *id_mgr = &adev->vm_manager.id_mgr[vmhub];
>>> +
>>> +    mutex_lock(&id_mgr->lock);
>>> +    if (vm->dedicated_vmid[vmhub]) {
>>> +        list_add(&vm->dedicated_vmid[vmhub]->list,
>>> +            &id_mgr->ids_lru);
>>> +        vm->dedicated_vmid[vmhub] = NULL;
>>> +    }
>>> +    mutex_unlock(&id_mgr->lock);
>>> +}
>>> +
>>> +static int amdgpu_vm_alloc_dedicated_vmid(struct amdgpu_device *adev,
>>> +                      struct amdgpu_vm *vm,
>>> +                      unsigned vmhub)
>>> +{
>>> +    struct amdgpu_vm_id_manager *id_mgr;
>>> +    struct amdgpu_vm_id *idle;
>>> +    int r;
>>> +
>>> +    id_mgr = &adev->vm_manager.id_mgr[vmhub];
>>> +    mutex_lock(&id_mgr->lock);
>>> +    /* Select the first entry VMID */
>>> +    idle = list_first_entry(&id_mgr->ids_lru, struct amdgpu_vm_id, list);
>>> +    list_del_init(&idle->list);
>>> +    vm->dedicated_vmid[vmhub] = idle;
>>
>> We need a check here if there isn't an ID already reserved for the VM.
>>
>> Additional to that I would still rename the field to reserved_vmid, that
>> describes better what it is actually doing.
> the vmid is global and is reserved from the global lru, so it makes sense
> to name it 'reserved_vmid' as in patch#4. but the reserved vmid is
> dedicated to one vm, so the field in the vm is 'dedicated_vmid' as here,
> which is my reason for the naming.

It's just the same thing looked at from different views.

IMHO, it's a vmid reserved from the global pool, so the field naming
"reserved_vmid" fits. And in patch#4, where it constrains the number, it
could be named "reserved_vmid_seq" or "reserved_vmid_ref".

Any of them looks reasonable, select the one you like. :)

Jerry

>
> Anyway, the alternative is OK with me although I prefer mine. I hope we can
> deliver this solution by today; many RGP issues depend on it.
>
> Thanks,
> David Zhou
>>
>>> +    mutex_unlock(&id_mgr->lock);
>>> +
>>> +    r = amdgpu_sync_wait(&idle->active);
>>> +    if (r)
>>> +        goto err;
>>
>> This is racy. A command submission could happen while we wait for the id to
>> become idle.
>>
>> The easiest way to solve this is to hold the lock while waiting for the ID to
>> become available.
>>
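An untested sketch of that, keeping id_mgr->lock held across the wait so a
concurrent amdgpu_vm_grab_id() cannot hand the ID out in between (error
handling is illustrative):

    mutex_lock(&id_mgr->lock);
    idle = list_first_entry(&id_mgr->ids_lru, struct amdgpu_vm_id, list);
    list_del_init(&idle->list);

    /* wait under the lock so no command submission can race with us */
    r = amdgpu_sync_wait(&idle->active);
    if (r) {
        /* waiting failed, put the ID back onto the LRU */
        list_add(&idle->list, &id_mgr->ids_lru);
        mutex_unlock(&id_mgr->lock);
        return r;
    }

    vm->dedicated_vmid[vmhub] = idle;
    mutex_unlock(&id_mgr->lock);

    return 0;
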
>>> +
>>> +    return 0;
>>> +err:
>>> +    amdgpu_vm_free_dedicated_vmid(adev, vm, vmhub);
>>> +    return r;
>>> +}
>>> +
>>>   static bool amdgpu_vm_ring_has_compute_vm_bug(struct amdgpu_ring *ring)
>>>   {
>>>       struct amdgpu_device *adev = ring->adev;
>>> @@ -2316,18 +2362,8 @@ void amdgpu_vm_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
>>>         amdgpu_vm_free_levels(&vm->root);
>>>       fence_put(vm->last_dir_update);
>>> -    for (i = 0; i < AMDGPU_MAX_VMHUBS; i++) {
>>> -        struct amdgpu_vm_id_manager *id_mgr =
>>> -            &adev->vm_manager.id_mgr[i];
>>> -
>>> -        mutex_lock(&id_mgr->lock);
>>> -        if (vm->dedicated_vmid[i]) {
>>> -            list_add(&vm->dedicated_vmid[i]->list,
>>> -                 &id_mgr->ids_lru);
>>> -            vm->dedicated_vmid[i] = NULL;
>>> -        }
>>> -        mutex_unlock(&id_mgr->lock);
>>> -    }
>>> +    for (i = 0; i < AMDGPU_MAX_VMHUBS; i++)
>>> +        amdgpu_vm_free_dedicated_vmid(adev, vm, i);
>>>   }
>>>     /**
>>> @@ -2400,11 +2436,19 @@ int amdgpu_vm_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
>>>       union drm_amdgpu_vm *args = data;
>>>       struct amdgpu_device *adev = dev->dev_private;
>>>       struct amdgpu_fpriv *fpriv = filp->driver_priv;
>>> +    int r;
>>>         switch (args->in.op) {
>>>       case AMDGPU_VM_OP_RESERVE_VMID:
>>> +        /* currently, we only need to reserve a vmid from the gfxhub */
>>> +        if (!amdgpu_vm_dedicated_vmid_ready(&fpriv->vm, 0)) {
>>> +            r = amdgpu_vm_alloc_dedicated_vmid(adev, &fpriv->vm, 0);
>>
>> This is racy as well.
>>
>> Two threads could try to reserve a VMID at the same time. You need to
>> integrate the check if it's already allocated into
>> amdgpu_vm_alloc_dedicated_vmid().
>>
>> Regards,
>> Christian.
>>
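A minimal sketch of folding that check into the allocator (untested, and
combining it with the wait-under-lock change suggested above), so that
concurrent RESERVE_VMID ioctls simply serialize on id_mgr->lock:

    static int amdgpu_vm_alloc_dedicated_vmid(struct amdgpu_device *adev,
                                              struct amdgpu_vm *vm,
                                              unsigned vmhub)
    {
        struct amdgpu_vm_id_manager *id_mgr = &adev->vm_manager.id_mgr[vmhub];
        struct amdgpu_vm_id *idle;
        int r = 0;

        mutex_lock(&id_mgr->lock);
        /* nothing to do if an ID is already reserved for this VM */
        if (vm->dedicated_vmid[vmhub])
            goto unlock;

        idle = list_first_entry(&id_mgr->ids_lru, struct amdgpu_vm_id, list);
        list_del_init(&idle->list);

        r = amdgpu_sync_wait(&idle->active);
        if (r) {
            /* waiting failed, put the ID back onto the LRU */
            list_add(&idle->list, &id_mgr->ids_lru);
            goto unlock;
        }
        vm->dedicated_vmid[vmhub] = idle;

    unlock:
        mutex_unlock(&id_mgr->lock);
        return r;
    }

The ioctl can then drop the amdgpu_vm_dedicated_vmid_ready() check and call
this unconditionally.
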
>>> +            if (r)
>>> +                return r;
>>> +        }
>>> +        break;
>>>       case AMDGPU_VM_OP_UNRESERVE_VMID:
>>> -        return -EINVAL;
>>> +        amdgpu_vm_free_dedicated_vmid(adev, &fpriv->vm, 0);
>>>           break;
>>>       default:
>>>           return -EINVAL;
>>
>>
>
> _______________________________________________
> amd-gfx mailing list
> amd-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
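
P.S. For completeness, a hypothetical userspace invocation of the new ioctl,
assuming the union drm_amdgpu_vm and the op values introduced by this series
plus libdrm's drmCommandWriteRead():

    #include <stdbool.h>
    #include <string.h>
    #include <xf86drm.h>
    #include <amdgpu_drm.h>

    /* reserve (or unreserve) a dedicated VMID for the VM of this DRM fd */
    static int vmid_reservation(int fd, bool reserve)
    {
        union drm_amdgpu_vm args;

        memset(&args, 0, sizeof(args));
        args.in.op = reserve ? AMDGPU_VM_OP_RESERVE_VMID :
                               AMDGPU_VM_OP_UNRESERVE_VMID;

        return drmCommandWriteRead(fd, DRM_AMDGPU_VM, &args, sizeof(args));
    }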

