Re: Interacting with coherent memory on external devices


 



On Fri, 24 Apr 2015, Jerome Glisse wrote:

> > Right, this is how things work, and you could improve on that. Stay with
> > the scheme. Why would that not work if you map things the same way in
> > both environments, given that both the accelerator and the host processor
> > can access each other's memory?
>
> Again and again: a shared address space means a pointer means the same
> thing for the GPU as it does for the CPU, i.e. an arbitrary pointer refers
> to the same memory whether it is dereferenced by the GPU or the CPU, while
> also preserving the properties of the backing memory. It can be memory
> shared with another process, a file mmaped from disk, or simply anonymous
> memory, so we have no control whatsoever over how such memory is allocated.

Still no answer as to why that is not possible with the current scheme.
You keep talking about pointers, and I keep responding that this is a
matter of making the address space compatible on both sides.

> Then add transparent migration (transparent in the sense that we can
> handle a CPU page fault on migrated memory) and you will see that you
> need to modify the kernel to make it aware of this and to provide common
> code to deal with all of it.

If the GPU works like a CPU (which I keep hearing), then you should also be
able to run a Linux kernel on it and make it a regular NUMA node. Hey, why
don't we make the host CPU a GPU (hello, Xeon Phi)?


--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@xxxxxxxxx.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@xxxxxxxxx



