Hi Brad,
On 06/18/2014 05:36 PM, bradley.d.volkin@xxxxxxxxx wrote:
From: Brad Volkin <bradley.d.volkin@xxxxxxxxx>
This adds a small module for managing a pool of batch buffers.
The only current use case is for the command parser, as described
in the kerneldoc in the patch. The code is simple, but separating
it out makes it easier to change the underlying algorithms and to
extend to future use cases should they arise.
The interface is simple: alloc to create an empty pool, free to
clean it up; get to obtain a new buffer, put to return it to the
pool. Note that all buffers must be returned to the pool before
freeing it.
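In outline, something like the following (simplified; names and
signatures are illustrative, see the patch for the exact ones):

struct i915_batch_pool;

/* Create an empty pool / tear it down again.  All buffers must have
 * been returned via put before free is called.
 */
struct i915_batch_pool *batch_pool_alloc(struct drm_device *dev);
void batch_pool_free(struct i915_batch_pool *pool);

/* Obtain a buffer of at least @size bytes, reusing a pooled buffer
 * where possible and allocating a fresh one otherwise.
 */
struct drm_i915_gem_object *batch_pool_get(struct i915_batch_pool *pool,
					   size_t size);

/* Return a buffer to the pool for later reuse. */
void batch_pool_put(struct i915_batch_pool *pool,
		    struct drm_i915_gem_object *obj);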
The pool is capped at a maximum number of buffers because some tests
(e.g. gem_exec_nop) create a very large number of them (e.g.
___). Buffers are purgeable while in the pool, but are not explicitly
truncated, in order to avoid overhead during execbuf.
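On the put path that presumably boils down to something like this (a
sketch using the driver's existing obj->madv convention; the list
field names are illustrative, not the literal patch code):

	/* Allow the shrinker to reclaim the backing pages under memory
	 * pressure, but do not truncate them here: a later get can then
	 * often reuse the pages without reallocating them.
	 */
	obj->madv = I915_MADV_DONTNEED;
	list_add_tail(&obj->pool_list, &pool->free_list);

with get marking the buffer I915_MADV_WILLNEED again before handing
it out.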
Locking is currently based on the caller holding the struct_mutex.
We already do that in the places where we will use the batch pool
for the command parser.
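So each entry point can simply assert the existing lock rather than
take one of its own, along the lines of (sketch; pool->dev is an
assumed field):

	struct drm_i915_gem_object *
	batch_pool_get(struct i915_batch_pool *pool, size_t size)
	{
		/* Rely on the caller holding the device-global lock. */
		WARN_ON(!mutex_is_locked(&pool->dev->struct_mutex));

		/* ... search the free list / allocate as above ... */
	}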
Signed-off-by: Brad Volkin <bradley.d.volkin@xxxxxxxxx>
---
Re: pool capacity
My original testing showed something like thousands of buffers in
the pool after a gem_exec_nop run. But when I reran with the max
check disabled just now to get an actual number for the commit
message, the number was more like 130. I developed and tested the
changes incrementally, and suspect that the original run was before
I implemented the actual copy operation. So I'm inclined to remove
or at least increase the cap in the final version. Thoughts?
Some random thoughts:
Is it strictly necessary to cap the pool size? I ask because it
introduces an explicit limit where so far there wasn't one.
Are object sizes generally page aligned, or have you seen all sorts of
sizes in the distribution? Either way, I am wondering whether some sort
of rounding up would be more efficient. Would it cause a problem if a
slightly larger object was returned?
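Something like this in the get path is what I have in mind (just a
sketch, field names illustrative; roundup_pow_of_two() is from
linux/log2.h):

	/* A 4097-byte request can then be satisfied by any cached
	 * 8192-byte object, improving reuse at the cost of sometimes
	 * handing back a larger object than asked for.
	 */
	size = roundup_pow_of_two(size);

	list_for_each_entry(obj, &pool->free_list, pool_list) {
		if (obj->base.size >= size) {
			list_del(&obj->pool_list);
			return obj;
		}
	}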
Given that objects are only ever added to the pool, once the maximum
number has been allocated and there are no free ones of the exact size,
it nags userspace with EAGAIN and retires objects. But I wonder if the
above points could reduce that behaviour?
Could we get away without tracking the given-out objects in a list and
just keep a list of the available ones? In which case, if an object can
only ever be either in the free pool or on one of the existing GEM
active/inactive lists, could the same list head be used?
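i.e. if the two states really are mutually exclusive, something like
(sketch, field names illustrative):

	static void batch_pool_put(struct i915_batch_pool *pool,
				   struct drm_i915_gem_object *obj)
	{
		/* While pooled, the object sits on pool->free_list; once
		 * handed out it moves back to the usual GEM lists, so it
		 * is never on both and no "given out" list is needed.
		 */
		list_move_tail(&obj->mm_list, &pool->free_list);
	}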
Could it use its own locking just as easily? I'm just thinking that if
the future goal is finer-grained locking, this seems self-contained
enough to be doable straight away, unless I am missing something.
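e.g. just embedding a lock in the pool itself (sketch):

	struct i915_batch_pool {
		struct mutex lock;		/* guards free_list below */
		struct list_head free_list;	/* buffers available for reuse */
	};

with get/put taking pool->lock instead of relying on struct_mutex.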
The above would make the pool more generic, but then I read Chris's
reply, which perhaps suggests making it more specialised, so I don't know.
Regards,
Tvrtko