Re: [LSF/MM TOPIC] Hardware initiated paging of user process pages, hardware access to the CPU page tables of user processes

On Wed, Apr 10, 2013 at 09:57:02AM +0800, Simon Jeons wrote:
> Hi Jerome,
> On 02/10/2013 12:29 AM, Jerome Glisse wrote:
> >On Sat, Feb 9, 2013 at 1:05 AM, Michel Lespinasse <walken@xxxxxxxxxx> wrote:
> >>On Fri, Feb 8, 2013 at 3:18 AM, Shachar Raindel <raindel@xxxxxxxxxxxx> wrote:
> >>>Hi,
> >>>
> >>>We would like to present a reference implementation for safely sharing
> >>>memory pages from user space with the hardware, without pinning.
> >>>
> >>>We will be happy to hear the community feedback on our prototype
> >>>implementation, and suggestions for future improvements.
> >>>
> >>>We would also like to discuss adding features to the core MM subsystem to
> >>>assist hardware access to user memory without pinning.
> >>This sounds kinda scary TBH; however I do understand the need for such
> >>technology.
> >>
> >>I think one issue is that many MM developers are insufficiently aware
> >>of such developments; having a technology presentation would probably
> >>help there; but traditionally LSF/MM sessions are more interactive
> >>between developers who are already quite familiar with the technology.
> >>I think it would help if you could send in advance a detailed
> >>presentation of the problem and the proposed solutions (and then what
> >>they require of the MM layer) so people can be better prepared.
> >>
> >>And first I'd like to ask, aren't IOMMUs supposed to already largely
> >>solve this problem? (probably a dumb question, but that just tells
> >>you how much you need to explain :)
> >For GPUs the motivation is threefold. With the advance of GPU compute,
> >and with newer graphics programs, we see a massive increase in GPU
> >memory consumption; buffers bigger than 1GB are now easy to reach. So
> >the first motivation is to let the GPU directly use memory the user
> >allocated through malloc, which avoids copying a gigabyte of data from
> >the CPU to a GPU buffer. The second, and the most important for GPU
> >compute, is using the GPU seamlessly alongside the CPU; to achieve this
> >you want the programmer to have a single address space shared by the
> >CPU and GPU, so that the same address points to the same object on the
> >GPU as on the CPU. Third, this would also be a tremendously cleaner
> >design for memory management from the driver's point of view.
> 
> When will the GPU consume memory?
> 
> A userspace process like mplayer will have video data, and the GPU
> will play that data using mplayer's memory, since the video data is
> loaded into mplayer's address space? So the GPU driver will call gup
> to take a reference on that memory? Please correct me if my
> understanding is wrong. ;-)
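
What you describe is the classic pinning pattern drivers use today: the
driver calls get_user_pages(), which takes a reference on every page so
the pages cannot be reclaimed or migrated while the device uses them.
A rough sketch of that pattern, not code from any real driver (the
function name is invented, and the get_user_pages_fast() signature is
the one in kernels of this era):

/*
 * Sketch of the "pin with gup" pattern; my_pin_user_buffer is an
 * invented name.
 */
#include <linux/kernel.h>
#include <linux/errno.h>
#include <linux/mm.h>
#include <linux/slab.h>

static int my_pin_user_buffer(unsigned long uaddr, size_t len,
                              struct page ***pages_out, int *npages_out)
{
        int npages = DIV_ROUND_UP((uaddr & ~PAGE_MASK) + len, PAGE_SIZE);
        struct page **pages;
        int pinned;

        pages = kcalloc(npages, sizeof(*pages), GFP_KERNEL);
        if (!pages)
                return -ENOMEM;

        /*
         * Takes a reference on every page, so the pages cannot be
         * reclaimed, migrated or swapped while the device uses them.
         */
        pinned = get_user_pages_fast(uaddr, npages, 1 /* write */, pages);
        if (pinned < npages) {
                while (pinned > 0)
                        put_page(pages[--pinned]);
                kfree(pages);
                return -EFAULT;
        }

        *pages_out = pages;
        *npages_out = npages;
        /* Caller maps the pages for DMA and put_page()s them to unpin. */
        return 0;
}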

The first target is not things like video decompression, although those
could benefit from it too, given an updated driver kernel API. When using
IOMMU hardware page faults we do not call get_user_pages (gup), and thus
we do not take a reference on the page. That's the whole point of the
hardware page fault: not taking a reference on the page.
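
To make the contrast concrete, here is a minimal sketch of what the
reference-free side can look like; the names are invented and this is
not the posted prototype. Instead of pinning, the driver registers an
mmu_notifier so the core MM tells it whenever the CPU page tables
change, and the device simply refaults afterwards (callback signature
as in kernels of this era):

/*
 * Sketch of a reference-free "mirror the CPU page tables" approach;
 * my_mirror_* names are invented for illustration.
 */
#include <linux/kernel.h>
#include <linux/mm_types.h>
#include <linux/mmu_notifier.h>

struct my_mirror {
        struct mmu_notifier mn;
        /* ... handle to the device's translation tables ... */
};

static void my_mirror_invalidate_range_start(struct mmu_notifier *mn,
                                             struct mm_struct *mm,
                                             unsigned long start,
                                             unsigned long end)
{
        struct my_mirror *mirror = container_of(mn, struct my_mirror, mn);

        /*
         * Shoot down the device's translations for [start, end).  No
         * put_page() here because no reference was ever taken; the next
         * device access raises a hardware page fault and the driver
         * walks the (now up to date) CPU page tables again.
         */
        (void)mirror;
}

static const struct mmu_notifier_ops my_mirror_ops = {
        .invalidate_range_start = my_mirror_invalidate_range_start,
};

static int my_mirror_register(struct my_mirror *mirror, struct mm_struct *mm)
{
        mirror->mn.ops = &my_mirror_ops;
        /* From now on, CPU page-table updates reach the device via the
         * callback above instead of being blocked by page refcounts. */
        return mmu_notifier_register(&mirror->mn, mm);
}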

Cheers,
Jerome Glisse

