Hi Jerome,
On 04/15/2013 11:38 PM, Jerome Glisse wrote:
On Mon, Apr 15, 2013 at 4:39 AM, Simon Jeons <simon.jeons@xxxxxxxxx> wrote:
Hi Jerome,
On 02/10/2013 12:29 AM, Jerome Glisse wrote:
On Sat, Feb 9, 2013 at 1:05 AM, Michel Lespinasse <walken@xxxxxxxxxx> wrote:
On Fri, Feb 8, 2013 at 3:18 AM, Shachar Raindel <raindel@xxxxxxxxxxxx> wrote:
Hi,
We would like to present a reference implementation for safely sharing memory pages from user space with the hardware, without pinning.

We will be happy to hear the community's feedback on our prototype implementation, and suggestions for future improvements.

We would also like to discuss adding features to the core MM subsystem to assist hardware access to user memory without pinning.
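
For context, the conventional model being replaced pins the pages when a buffer is registered with the hardware. One common example today is an RDMA HCA; a minimal sketch with libibverbs (device selection, error handling, and the 1 GB size here are only illustrative) looks roughly like this:

#include <infiniband/verbs.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int num = 0;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0)
        return 1;

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    if (!ctx)
        return 1;
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    size_t len = 1UL << 30;          /* the ~1 GB case discussed below */
    void *buf = malloc(len);

    /* ibv_reg_mr() pins every page of buf and sets up the HCA's
     * translation entries; the pages stay pinned until the memory
     * region is deregistered. */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ);
    if (mr) {
        printf("registered %zu bytes, lkey=0x%x\n", len, mr->lkey);
        ibv_dereg_mr(mr);            /* only now are the pages unpinned */
    }

    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}

As I understand the proposal, registration would no longer pin anything up front; the hardware and driver would instead fault pages in as they are accessed.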
This sounds kinda scary TBH; however I do understand the need for such technology.
I think one issue is that many MM developers are insufficiently aware of such developments; having a technology presentation would probably help there; but traditionally LSF/MM sessions are more interactive between developers who are already quite familiar with the technology. I think it would help if you could send in advance a detailed presentation of the problem and the proposed solutions (and then what they require of the MM layer) so people can be better prepared.

And first I'd like to ask, aren't IOMMUs supposed to already largely solve this problem? (probably a dumb question, but that just tells you how much you need to explain :)
For GPUs the motivation is threefold. With the advance of GPU compute and also with newer graphics programs, we see a massive increase in GPU memory consumption. We can easily reach buffers that are bigger than 1 GB. So the first motivation is to directly use the memory the user allocated through malloc on the GPU; this avoids copying 1 GB of data with the CPU to the GPU buffer. The second and most important
Is the pinned memory you mentioned the memory the user allocated, or the memory of the GPU buffer?
The memory the user allocated; we don't want to pin this memory.
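
To make the tradeoff concrete, here is a minimal sketch of the two upload models being contrasted; OpenCL is used purely for illustration (the buffer size and flags are the only specifics added here). The first model copies the malloc'd data into a GPU-owned buffer, which is the 1 GB copy mentioned above; the second asks the driver to use the application's malloc'd memory directly, which on current stacks generally means pinning those pages for the buffer's lifetime.

#include <CL/cl.h>
#include <stdlib.h>

int main(void)
{
    cl_platform_id plat;
    cl_device_id dev;
    cl_int err;

    /* Setup trimmed to the minimum; error handling omitted. */
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, &err);

    size_t len = 1UL << 30;             /* the >1 GB case from the thread */
    void *data = malloc(len);

    /* Model 1: copy the whole buffer into GPU-owned memory. */
    cl_mem copied = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                   len, data, &err);

    /* Model 2: use the malloc'd pages directly; today this usually
     * means the driver pins them for the buffer's lifetime. */
    cl_mem direct = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_USE_HOST_PTR,
                                   len, data, &err);

    clReleaseMemObject(direct);
    clReleaseMemObject(copied);
    clReleaseContext(ctx);
    free(data);
    return 0;
}

The point of the proposal, as I read it, is to make the second model possible without the pinning.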
Once this idea is merged, we won't need to allocate memory for an integrated GPU's buffers, and a discrete GPU won't need to have its own memory, correct?
Cheers,
Jerome