Re: [RFC] Heterogeneous memory management (mirror process address space on a device mmu).

On Tue, May 06, 2014 at 11:22:48AM -0700, Linus Torvalds wrote:
> On Tue, May 6, 2014 at 11:13 AM, Jerome Glisse <j.glisse@xxxxxxxxx> wrote:
> >
> > I could grow the radix function to return some bool to avoid looping over for
> > case where there is no special entry.
> 
> .. or even just a bool (or counter) associated with the mapping to
> mark whether any special entries exist at all.
> 
> Also, the code to turn special entries is duplicated over and over
> again, usually together with a "FIXME - what about migration failure",
> so it would make sense to do that as it's own function.
> 
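
[A counter like the one Linus suggests might be sketched roughly as below. This is a minimal userspace sketch with invented names (`struct mapping`, `nr_special`, `scan_special_entries`), not the actual kernel code or the patchset's API.]

```c
#include <stddef.h>

/* Hypothetical stand-in for struct address_space: a counter tracks how
 * many special (exceptional) entries currently sit in the radix tree,
 * so callers can skip the walk entirely when there are none.  In real
 * code the counter would be updated under the tree lock. */
struct mapping {
	unsigned long nr_special;
};

/* Would-be helper: bail out immediately when no special entries exist,
 * otherwise walk the tree (walk elided here) converting special entries
 * back, and report how many were seen. */
static int scan_special_entries(struct mapping *m)
{
	if (m->nr_special == 0)
		return 0;	/* fast path: nothing special to convert */
	/* ... walk the radix tree, turning special entries back ... */
	return (int)m->nr_special;
}
```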

Migration failure is when something goes horribly wrong and the GPU cannot
copy the page back to system memory. The philosophical question that goes
with it is what to do about the other processes. Make them SIGBUS?

The answer so far is to treat it like any CPU thread that crashed after
only half writing the content it wanted into the page. So other threads
will use the latest version of the data we have. Threads that triggered
the migration to GPU memory would see a SIGBUS (those threads are GPU
aware, as they use some form of GPU API such as OpenCL).

> But conceptually I don't hate it. I didn't much like having random
> hmm_pagecache_migrate() calls in core vm code, and code like this
> 
> +                       hmm_pagecache_migrate(mapping, swap);
> +                       spd.pages[page_nr] = find_get_page(mapping,
> index + page_nr);
> 
> looks fundamentally racy, and in other places you seemed to assume
> that all exceptional entries are always about hmm, which looked
> questionable. But those are details.  The concept of putting a special
> swap entry in the mapping radix trees I don't necessarily find
> objectionable per se.
> 
>            Linus

So far only shmem uses special entries, and my patchset does not support
it yet, as I wanted to vet the design first.
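
[The special-entry trick discussed here relies on pointer tagging: a slot holds either a page pointer or a tagged non-pointer value. A toy illustration follows; the bit layout, names, and the cookie payload are invented, and the kernel's actual encoding differs.]

```c
#include <stdint.h>

/* Toy model of a "special" radix-tree slot: a real page is a pointer
 * (low bits clear), while a special swap entry sets bit 0 and keeps a
 * payload (e.g. a device swap cookie) in the remaining bits. */
#define SPECIAL_FLAG 0x1UL

static uintptr_t make_special(uintptr_t cookie)
{
	return (cookie << 1) | SPECIAL_FLAG;
}

static int entry_is_special(uintptr_t entry)
{
	return (int)(entry & SPECIAL_FLAG);
}

static uintptr_t special_cookie(uintptr_t entry)
{
	return entry >> 1;
}
```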

hmm_pagecache_migrate is the function that triggers migration back to
system memory. Once again, the expectation is that such a code path will
never be called: only the process that uses the GPU and the mmaped file
will ever access those pages, and this process knows that it should not
touch them while they are on the GPU, so if it does it has to suffer the
consequences.
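
[Linus calls the migrate-then-lookup sequence fundamentally racy: a single find_get_page() after migration leaves a window where the slot can change again. One common shape for closing such a window is a re-check loop. This is only a toy model under invented names, not the patchset's code.]

```c
#include <stddef.h>

/* Invented stand-in for a mapping slot. */
struct slot {
	int is_special;	/* nonzero while the page lives in device memory */
	void *page;	/* meaningful only when is_special == 0 */
};

/* Fake migration back to "system memory". */
static void migrate_back(struct slot *s, void *sys_page)
{
	s->is_special = 0;
	s->page = sys_page;
}

/* Re-check loop: migrate and look up again until the slot holds a real
 * page (bounded here just to keep the sketch finite). */
static void *lookup_page(struct slot *s, void *sys_page)
{
	for (int tries = 0; tries < 8; tries++) {
		if (!s->is_special)
			return s->page;
		/* in real code another thread could flip the slot here */
		migrate_back(s, sys_page);
	}
	return NULL;
}
```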

Thanks a lot for all the feedback, much appreciated.

Cheers,
Jérôme

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@xxxxxxxxx.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <dont@xxxxxxxxx>



