On 12.07.19 at 10:03, Chris Wilson wrote:
> Since kmalloc() will round up the allocation to the next slab size or
> page, it will normally return a pointer to a memory block bigger than we
> asked for. We can query for the actual size of the allocated block using
> ksize() and expand our variable size reservation_list to take advantage
> of that extra space.
>
> Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> Cc: Christian König <christian.koenig@xxxxxxx>
> Cc: Michel Dänzer <michel.daenzer@xxxxxxx>

Reviewed-by: Christian König <christian.koenig@xxxxxxx>

BTW: I was wondering if we shouldn't replace the reservation_object_list
with a dma_fence_chain. That would cost us a bit more memory and would be
slightly slower when querying a fence in the container, but it would be
much faster at adding new fences and would massively simplify waiting for
or returning all fences currently in the container.

Christian.

> ---
>  drivers/dma-buf/reservation.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/dma-buf/reservation.c b/drivers/dma-buf/reservation.c
> index a6ac2b3a0185..80ecc1283d15 100644
> --- a/drivers/dma-buf/reservation.c
> +++ b/drivers/dma-buf/reservation.c
> @@ -153,7 +153,9 @@ int reservation_object_reserve_shared(struct reservation_object *obj,
>  		RCU_INIT_POINTER(new->shared[j++], fence);
>  	}
>  	new->shared_count = j;
> -	new->shared_max = max;
> +	new->shared_max =
> +		(ksize(new) - offsetof(typeof(*new), shared)) /
> +		sizeof(*new->shared);
>
>  	preempt_disable();
>  	write_seqcount_begin(&obj->seq);
> @@ -169,7 +171,7 @@ int reservation_object_reserve_shared(struct reservation_object *obj,
>  		return 0;
>
>  	/* Drop the references to the signaled fences */
> -	for (i = k; i < new->shared_max; ++i) {
> +	for (i = k; i < max; ++i) {
>  		struct dma_fence *fence;
>
>  		fence = rcu_dereference_protected(new->shared[i],
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx