Re: [PATCH] drm/ttm: stop warning on TT shrinker failure


 



On 23.03.21 13:37, Michal Hocko wrote:
> On Tue 23-03-21 13:21:32, Christian König wrote:
>> On 23.03.21 13:04, Michal Hocko wrote:
>>> On Tue 23-03-21 12:48:58, Christian König wrote:
>>>> On 23.03.21 12:28, Daniel Vetter wrote:
>>>>> On Tue, Mar 23, 2021 at 08:38:33AM +0100, Michal Hocko wrote:
>>>>>> On Mon 22-03-21 20:34:25, Christian König wrote:
>>>>>> [...]
>>>>>>> My only concern is that if I could rely on memalloc_no* being used we
>>>>>>> could optimize this quite a bit further.
>>>>>> Yes, you can use the scope API and you will be guaranteed that _any_
>>>>>> allocation from the enclosed context will inherit the GFP_NO* semantic.
>>>> The question is whether this is also guaranteed the other way around?
>>>>
>>>> In other words, if somebody calls get_free_page(GFP_NOFS), are the
>>>> context flags set as well?
>>> The gfp mask is always restricted in the page allocator. So say you have
>>> a noio scope context and call get_free_page/kmalloc(GFP_NOFS); then the
>>> scope would restrict the allocation flags to GFP_NOIO (aka drop
>>> __GFP_IO). For further details, have a look at current_gfp_context()
>>> and its callers.
>>>
>>> Does this answer your question?
>> But what happens if you don't have a noio scope and somebody calls
>> get_free_page(GFP_NOFS)?
> Then this will be a regular NOFS request. Let me repeat: the scope API will
> further restrict any requested allocation mode.

Ok, got it.
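To make the direction of that restriction concrete, here is a minimal sketch
(the helper name is hypothetical; the scope calls are the kernel's
memalloc_noio_save()/memalloc_noio_restore() API):

#include <linux/gfp.h>
#include <linux/sched/mm.h>

/* Illustrative only: the function name is made up for this example. */
static struct page *alloc_inside_noio_scope(void)
{
	unsigned int noio_flags;
	struct page *page;

	noio_flags = memalloc_noio_save();	/* enter NOIO scope */

	/*
	 * The caller asks for GFP_NOFS, but because of the scope above the
	 * page allocator (via current_gfp_context()) also drops __GFP_IO,
	 * so the request is effectively treated as GFP_NOIO.
	 */
	page = alloc_page(GFP_NOFS);

	memalloc_noio_restore(noio_flags);	/* leave NOIO scope */
	return page;
}

Without the surrounding scope, the same call would be a plain GFP_NOFS
request, which is the point Michal makes above.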


>> Is the noio scope then added automatically? And is it possible that the
>> shrinker gets called without the noio scope even though we would need it?
> Here you have lost me again.

>>>>> I think this is where I don't yet get what Christian is trying to do: we
>>>>> really shouldn't use different tricks and calling contexts between
>>>>> direct reclaim and kswapd reclaim. Otherwise very hard-to-track-down
>>>>> bugs are pretty much guaranteed. So whether we use explicit gfp flags
>>>>> or the context APIs, the result is exactly the same.
>>>> Ok, let us recap what TTM's TT shrinker does here:
>>>>
>>>> 1. We have memory which is not swappable because it might be accessed by
>>>> the GPU at any time.
>>>> 2. Make sure the memory is not accessed by the GPU and that the driver
>>>> needs to grab a lock before it can make it accessible again.
>>>> 3. Allocate a shmem file and copy over the not swappable pages.
>>> This is quite tricky because the shrinker operates in the PF_MEMALLOC
>>> context, so such an allocation would be allowed to completely deplete
>>> memory unless you explicitly mark that context as __GFP_NOMEMALLOC.
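A rough sketch of an allocation made from that shrinker context with the
reserve opt-out could look like the following (the flag combination is
illustrative, not necessarily what TTM ships):

#include <linux/gfp.h>

/* Illustrative only: pick flags that refuse the PF_MEMALLOC reserves. */
static struct page *alloc_from_shrinker(void)
{
	gfp_t gfp = GFP_KERNEL | __GFP_NOMEMALLOC |
		    __GFP_RETRY_MAYFAIL | __GFP_NOWARN;

	/*
	 * __GFP_NOMEMALLOC takes precedence over PF_MEMALLOC, so even
	 * though the shrinker runs in reclaim context this allocation
	 * cannot eat into the emergency reserves; it fails instead.
	 */
	return alloc_page(gfp);
}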
>> Thanks, exactly that was one thing I was absolutely not sure about. And
>> yes, I agree that this is really tricky.
>>
>> Ideally I would like to be able to trigger swapping out the shmem page I
>> allocated immediately after doing the copy.
> So let me try to rephrase to make sure I understand. You would like to
> swap out the existing content from the shrinker and you use shmem as a
> way to achieve that. The swapout should happen at the time of copying
> (shrinker context) or shortly afterwards?
>
> So effectively to call pageout() on the shmem page after the copy?

Yes, exactly that.

>> This way I would only need a single page for the whole shrink operation
>> at any given time.
> What do you mean by that? You want to share the same shmem page for
> other copy+swapout operations?

Correct, yes.

The idea is that we can swap out the content of a full GPU buffer object this
way, giving the backing store of the object back to the core memory
management.
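For readers following along, the copy-to-shmem step described above could
look roughly like this sketch (names, flags and error handling are
illustrative; this is not the exact TTM code):

#include <linux/err.h>
#include <linux/file.h>
#include <linux/highmem.h>
#include <linux/pagemap.h>
#include <linux/shmem_fs.h>

/* Illustrative only: copy a buffer object's pages into a new shmem file. */
static struct file *copy_pages_to_shmem(struct page **pages,
					unsigned long num_pages)
{
	loff_t size = (loff_t)num_pages << PAGE_SHIFT;
	gfp_t gfp = GFP_KERNEL | __GFP_NOMEMALLOC | __GFP_RETRY_MAYFAIL;
	struct file *swap_storage;
	unsigned long i;

	swap_storage = shmem_file_setup("gpu swap", size, 0);
	if (IS_ERR(swap_storage))
		return swap_storage;

	for (i = 0; i < num_pages; i++) {
		struct page *to_page;

		to_page = shmem_read_mapping_page_gfp(swap_storage->f_mapping,
						      i, gfp);
		if (IS_ERR(to_page)) {
			fput(swap_storage);
			return ERR_CAST(to_page);
		}

		copy_highpage(to_page, pages[i]);
		set_page_dirty(to_page);	/* make it a swapout candidate */
		put_page(to_page);
		/*
		 * The open question in the thread: ideally the shmem page
		 * would be written out (pageout()-style) right here, so that
		 * only one extra page is needed at any given time.
		 */
	}

	/* The caller keeps the file to read the pages back on demand. */
	return swap_storage;
}

Each copied page is only marked dirty here; whether it can be pushed out to
swap immediately from the shrinker context is exactly what the discussion
above is trying to settle.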

Regards,
Christian.
_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/dri-devel



