RE: [PATCH 02/83] drm/radeon: reduce number of free VMIDs and pipes in KV

>-----Original Message-----
>From: dri-devel [mailto:dri-devel-bounces@xxxxxxxxxxxxxxxxxxxxx] On Behalf
>Of Alex Deucher
>Sent: Friday, July 11, 2014 12:23 PM
>To: Koenig, Christian
>Cc: Oded Gabbay; Lewycky, Andrew; LKML; Maling list - DRI developers;
>Deucher, Alexander
>Subject: Re: [PATCH 02/83] drm/radeon: reduce number of free VMIDs and
>pipes in KV
>
>On Fri, Jul 11, 2014 at 12:18 PM, Christian König <christian.koenig@xxxxxxx>
>wrote:
>> On 11.07.2014 18:05, Jerome Glisse wrote:
>>
>>> On Fri, Jul 11, 2014 at 12:50:02AM +0300, Oded Gabbay wrote:
>>>>
>>>> To support HSA on KV, we need to limit the number of vmids and pipes
>>>> that are available for radeon's use with KV.
>>>>
>>>> This patch reserves VMIDs 8-15 for KFD (so radeon can only use VMIDs
>>>> 0-7) and also makes radeon think that KV has only a single MEC with a
>>>> single pipe in it.
>>>>
>>>> Signed-off-by: Oded Gabbay <oded.gabbay@xxxxxxx>
>>>
>>> Reviewed-by: Jérôme Glisse <jglisse@xxxxxxxxxx>
>>
>>
>> At least for the VMIDs, on-demand allocation should be trivial to
>> implement, so I would rather prefer this instead of a fixed assignment.
>
>IIRC, the way the CP HW scheduler works, you have to give it a range of
>VMIDs and it assigns them dynamically as queues are mapped, so effectively
>they are all potentially in use once the CP scheduler is set up.
>
>Alex

Right. The SET_RESOURCES packet (kfd_pm4_headers.h, added in patch 49) allocates a range of HW queues, VMIDs and GDS to the HW scheduler. The scheduler then uses the allocated VMIDs to support a potentially larger number of user processes, dynamically mapping PASIDs to VMIDs and memory queue descriptors (MQDs) to HW queues.
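
To make that concrete, here is a rough sketch of what such a packet body conveys. This is purely illustrative shorthand on my part (the field names and layout below are made up), not the actual definition in kfd_pm4_headers.h:

#include <stdint.h>

/*
 * Illustrative only: a one-time "hand these resources to the HW
 * scheduler" packet. A real PM4 type-3 packet starts with a header
 * dword carrying the opcode and body size.
 */
struct set_resources_sketch {
	uint32_t header;	/* PM4 type-3 header (opcode + count) */
	uint32_t vmid_mask;	/* VMIDs given to the scheduler; 0xFF00
				 * would select VMIDs 8-15, matching
				 * the split in this patch */
	uint32_t queue_mask_lo;	/* HW queues the scheduler may use */
	uint32_t queue_mask_hi;
	uint32_t gds_base;	/* GDS partition reserved for it */
	uint32_t gds_size;
};

Because the MQDs live in memory, eight VMIDs are enough to serve more than eight processes: a PASID only occupies a VMID while its queues are actually mapped to HW queues.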

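In software terms, the dynamic mapping Alex described boils down to something like the sketch below. Again, this is only a conceptual model of VMID oversubscription, not how the CP firmware actually implements it:

#include <stdint.h>

#define FIRST_VMID	8	/* KFD's range per this patch: VMIDs 8-15 */
#define NUM_VMIDS	8

static uint16_t pasid_for_vmid[NUM_VMIDS];	/* 0 means the VMID is free */

/* Bind a process (PASID) to a free VMID when its queues get mapped. */
static int acquire_vmid(uint16_t pasid)
{
	int i;

	for (i = 0; i < NUM_VMIDS; i++) {
		if (pasid_for_vmid[i] == 0) {
			pasid_for_vmid[i] = pasid;
			/* this is where the process's page tables would
			 * be attached to the VMID */
			return FIRST_VMID + i;
		}
	}
	return -1;	/* all VMIDs busy: unmap another process first */
}

/* Recycle the VMID once the process's queues are unmapped. */
static void release_vmid(int vmid)
{
	pasid_for_vmid[vmid - FIRST_VMID] = 0;
}

This is also why, as Alex said, the whole range is effectively in use as soon as the scheduler is running: any of the eight VMIDs can be handed out at any time.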
BTW, Oded, I think we have some duplicated defines at the end of kfd_pm4_headers.h. If they are really duplicates, it would be great to remove those before the pull request.

Thanks,
JB

>
>
>>
>> Christian.
>>
>>
>>>
>>>> ---
>>>>  drivers/gpu/drm/radeon/cik.c | 48 ++++++++++++++++++++++----------------------
>>>>  1 file changed, 24 insertions(+), 24 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/radeon/cik.c b/drivers/gpu/drm/radeon/cik.c
>>>> index 4bfc2c0..e0c8052 100644
>>>> --- a/drivers/gpu/drm/radeon/cik.c
>>>> +++ b/drivers/gpu/drm/radeon/cik.c
>>>> @@ -4662,12 +4662,11 @@ static int cik_mec_init(struct radeon_device *rdev)
>>>>  	/*
>>>>  	 * KV:    2 MEC, 4 Pipes/MEC, 8 Queues/Pipe - 64 Queues total
>>>>  	 * CI/KB: 1 MEC, 4 Pipes/MEC, 8 Queues/Pipe - 32 Queues total
>>>> +	 * Nonetheless, we assign only 1 pipe because all other pipes will
>>>> +	 * be handled by KFD
>>>>  	 */
>>>> -	if (rdev->family == CHIP_KAVERI)
>>>> -		rdev->mec.num_mec = 2;
>>>> -	else
>>>> -		rdev->mec.num_mec = 1;
>>>> -	rdev->mec.num_pipe = 4;
>>>> +	rdev->mec.num_mec = 1;
>>>> +	rdev->mec.num_pipe = 1;
>>>>  	rdev->mec.num_queue = rdev->mec.num_mec * rdev->mec.num_pipe * 8;
>>>>
>>>>  	if (rdev->mec.hpd_eop_obj == NULL) {
>>>> @@ -4809,28 +4808,24 @@ static int cik_cp_compute_resume(struct radeon_device *rdev)
>>>>
>>>>  	/* init the pipes */
>>>>  	mutex_lock(&rdev->srbm_mutex);
>>>> -	for (i = 0; i < (rdev->mec.num_pipe * rdev->mec.num_mec); i++) {
>>>> -		int me = (i < 4) ? 1 : 2;
>>>> -		int pipe = (i < 4) ? i : (i - 4);
>>>>
>>>> -		eop_gpu_addr = rdev->mec.hpd_eop_gpu_addr + (i * MEC_HPD_SIZE * 2);
>>>> +	eop_gpu_addr = rdev->mec.hpd_eop_gpu_addr;
>>>>
>>>> -		cik_srbm_select(rdev, me, pipe, 0, 0);
>>>> +	cik_srbm_select(rdev, 0, 0, 0, 0);
>>>>
>>>> -		/* write the EOP addr */
>>>> -		WREG32(CP_HPD_EOP_BASE_ADDR, eop_gpu_addr >> 8);
>>>> -		WREG32(CP_HPD_EOP_BASE_ADDR_HI, upper_32_bits(eop_gpu_addr) >> 8);
>>>> +	/* write the EOP addr */
>>>> +	WREG32(CP_HPD_EOP_BASE_ADDR, eop_gpu_addr >> 8);
>>>> +	WREG32(CP_HPD_EOP_BASE_ADDR_HI, upper_32_bits(eop_gpu_addr) >> 8);
>>>>
>>>> -		/* set the VMID assigned */
>>>> -		WREG32(CP_HPD_EOP_VMID, 0);
>>>> +	/* set the VMID assigned */
>>>> +	WREG32(CP_HPD_EOP_VMID, 0);
>>>> +
>>>> +	/* set the EOP size, register value is 2^(EOP_SIZE+1) dwords */
>>>> +	tmp = RREG32(CP_HPD_EOP_CONTROL);
>>>> +	tmp &= ~EOP_SIZE_MASK;
>>>> +	tmp |= order_base_2(MEC_HPD_SIZE / 8);
>>>> +	WREG32(CP_HPD_EOP_CONTROL, tmp);
>>>>
>>>> -		/* set the EOP size, register value is 2^(EOP_SIZE+1) dwords */
>>>> -		tmp = RREG32(CP_HPD_EOP_CONTROL);
>>>> -		tmp &= ~EOP_SIZE_MASK;
>>>> -		tmp |= order_base_2(MEC_HPD_SIZE / 8);
>>>> -		WREG32(CP_HPD_EOP_CONTROL, tmp);
>>>> -	}
>>>> -	cik_srbm_select(rdev, 0, 0, 0, 0);
>>>>  	mutex_unlock(&rdev->srbm_mutex);
>>>>
>>>>  	/* init the queues.  Just two for now. */
>>>> @@ -5876,8 +5871,13 @@ int cik_ib_parse(struct radeon_device *rdev, struct radeon_ib *ib)
>>>>   */
>>>>  int cik_vm_init(struct radeon_device *rdev)
>>>>  {
>>>> -	/* number of VMs */
>>>> -	rdev->vm_manager.nvm = 16;
>>>> +	/*
>>>> +	 * number of VMs
>>>> +	 * VMID 0 is reserved for Graphics
>>>> +	 * radeon compute will use VMIDs 1-7
>>>> +	 * KFD will use VMIDs 8-15
>>>> +	 */
>>>> +	rdev->vm_manager.nvm = 8;
>>>>  	/* base offset of vram pages */
>>>>  	if (rdev->flags & RADEON_IS_IGP) {
>>>>  		u64 tmp = RREG32(MC_VM_FB_OFFSET);
>>>> --
>>>> 1.9.1
>>>>
>>
_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/dri-devel