Re: [PATCH] drm/panfrost: Implement per FD address spaces

On 09/08/2019 04:01, Rob Herring wrote:
[...]
> I was worried too. It seems to be working pretty well though, but more
> testing would be good. I don't think there are a lot of usecases that
> use more AS than the h/w has (8 on T860), but I'm not sure.

Yeah, 8 is overkill. Some GPUs only have 4, which is a little tight and
might come back to bite us when we support queueing jobs on the GPU. In
this patch panfrost_mmu_as_get() will simply WARN() and then crash if
there isn't a free AS:

> 		WARN_ON(!lru_mmu);
> 
> 		list_del_init(&lru_mmu->list);
> 		as = lru_mmu->as;

This isn't a problem at the moment (there can be at most 2 jobs on the
GPU at once). But when you start queueing jobs it's possible for each
job to belong to a different address space. With three slots, each of
which can have one job running and one waiting, that's a minimum of 6
ASes, plus you might want one configured to dump counters. So a total
of 7 are needed to avoid having to wait. Hardware designers like powers
of 2, so we have 8.
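
Spelling the arithmetic out (purely illustrative - the names below are
made up, not taken from the patch):

#define NUM_JOB_SLOTS    3  /* JS0, JS1 and JS2 */
#define JOBS_PER_SLOT    2  /* one running + one queued in the _NEXT registers */
#define AS_FOR_COUNTERS  1  /* an AS configured for performance counter dumps */

/* 3 * 2 + 1 = 7, and rounding up to a power of 2 gives the 8 ASes on T860 */
#define MIN_AS_TO_AVOID_WAITING \
        (NUM_JOB_SLOTS * JOBS_PER_SLOT + AS_FOR_COUNTERS)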

kbase can also be lazy about dealing with completed jobs - this allows
even more jobs to be considered "on the GPU", so even with 8 ASes it is
possible to "run out"!

Steve