Hi, Roger,

On 02/06/2018 10:04 AM, Roger He wrote:
> Currently the ttm code has no allocation limit at all, so it allows
> page allocation without bound until OOM: once swap space is full of
> swapped-out pages, system memory fills up with ttm pages, and from
> then on any memory allocation request will trigger OOM.

I'm a bit curious, isn't this the way things are supposed to work on a
Linux system? If all memory resources are used up, the OOM killer will
kill the most memory-hungry (perhaps rogue) process, rather than
processes being nice and trying to find out themselves whether
allocations will succeed.

Why should TTM be different in that respect? It would be good to know
your reasoning WRT this. Admittedly, graphics process OOM memory
accounting doesn't work very well, due to not all BOs being CPU mapped,
but it looks like there is recent work towards fixing this?

One thing I looked into at one point was to have TTM do the swapping
itself instead of handing it off to the shmem system. That way we could
pre-allocate swap entries for all swappable (BO) memory, making sure
that we wouldn't run out of swap space when, for example, hibernating.
That would also limit the pinned, non-swappable memory (from TTM driver
kernel allocations, for example) to half the system memory resources.

Thanks,
Thomas
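
P.S. As a very rough, user-space sketch of the kind of global
"half of system memory" accounting being discussed here (all names,
numbers and the lack of locking are purely illustrative assumptions,
not the actual TTM interfaces; real code would use atomics and query
the real amount of system RAM):

/*
 * Toy model of a global page-allocation cap: refuse new page charges
 * once half of the (pretend) system memory has been accounted.
 * Hypothetical names only, for illustration of the idea.
 */
#include <stdbool.h>
#include <stdio.h>

#define TOTAL_SYSTEM_PAGES  (1024UL * 1024UL)          /* pretend 4 GiB of 4 KiB pages */
#define PINNED_PAGE_LIMIT   (TOTAL_SYSTEM_PAGES / 2)   /* cap at half of system memory */

static unsigned long accounted_pages;                   /* pages currently charged */

/* Charge 'npages' against the global limit; fail the allocation
 * instead of pushing the whole system toward OOM. */
static bool try_account_pages(unsigned long npages)
{
	if (accounted_pages + npages > PINNED_PAGE_LIMIT)
		return false;
	accounted_pages += npages;
	return true;
}

/* Return pages to the pool when a buffer object is freed or swapped out. */
static void unaccount_pages(unsigned long npages)
{
	accounted_pages -= npages;
}

int main(void)
{
	if (try_account_pages(256))
		printf("allocation allowed, %lu pages charged\n", accounted_pages);

	unaccount_pages(256);
	return 0;
}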