On 11/09/2011 09:22 PM, j.glisse@xxxxxxxxx wrote:
> From: Jerome Glisse <jglisse@xxxxxxxxxx>
>
> This is an overhaul of the ttm memory accounting. It tries to keep
> the same global behavior while removing the whole zone concept. It
> keeps a distinction for dma32 so that we make sure ttm doesn't
> starve the dma32 zone.
>
> There are four thresholds for memory allocation:
> - max_mem is the maximum memory the whole ttm infrastructure is
>   going to allow allocations for (with the exception of system
>   processes, see below)
> - emer_mem is the maximum memory allowed for system processes;
>   this limit is > max_mem
> - swap_limit is the threshold at which ttm will start trying to
>   swap objects because ttm is getting close to the max_mem limit
> - swap_dma32_limit is the threshold at which ttm will start
>   swapping objects to try to reduce the pressure on the dma32
>   zone. Note that we don't specifically target objects to swap,
>   so it might very well free more memory from highmem than from
>   dma32
>
> Accounting is done through used_mem & used_dma32_mem, whose sum
> gives the total amount of memory actually accounted for by ttm.
> The idea is that an allocation will fail if (used_mem +
> used_dma32_mem) > max_mem and swapping fails to make enough room.
>
> used_dma32_mem can be updated at a later stage, allowing the
> accounting test to be performed before allocating a whole batch
> of pages.
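
To make the quoted scheme concrete, here is a minimal sketch of the
state and admission check it describes. The names (ttm_mem_acct,
ttm_mem_admit) are hypothetical, not the actual patch; all sizes are
in bytes.

#include <linux/spinlock.h>
#include <linux/types.h>

struct ttm_mem_acct {
	spinlock_t lock;
	u64 max_mem;           /* hard limit for normal processes */
	u64 emer_mem;          /* higher limit for system processes */
	u64 swap_limit;        /* start swapping when the used sum crosses this */
	u64 swap_dma32_limit;  /* start swapping on dma32 pressure */
	u64 used_mem;          /* accounted non-dma32 memory */
	u64 used_dma32_mem;    /* accounted dma32 memory */
};

/* Admission test: fail when the sum would exceed the caller's limit. */
static bool ttm_mem_admit(struct ttm_mem_acct *acct, u64 size,
			  bool system_process)
{
	u64 limit = system_process ? acct->emer_mem : acct->max_mem;
	bool ok;

	spin_lock(&acct->lock);
	ok = acct->used_mem + acct->used_dma32_mem + size <= limit;
	if (ok)
		acct->used_mem += size; /* may be moved to dma32 later */
	spin_unlock(&acct->lock);
	return ok;
}

Swapping would be triggered when the used sum crosses swap_limit or
swap_dma32_limit, before the hard failure against max_mem.
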
Jerome, you're removing a fair amount of functionality here without
justifying why it can be removed.
Consider a low-end system with 1G of kernel memory and 10G of highmem.
How do we avoid putting stress on the kernel memory? I also wouldn't be
too surprised if DMA32 zones appear in HIGHMEM systems in the future,
making the current zone concept good to keep.
Also, in effect you move the DoS potential from *all* zones into the
DMA32 zone, and you create a race: multiple simultaneous allocators can
first pre-allocate out of the global zone and then update the DMA32
zone without synchronization. In this way you might theoretically end
up with more DMA32 pages allocated than are present in the zone.
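
Continuing the hypothetical sketch from above, the deferred dma32
update the description allows would look roughly like this; note that
nothing in it bounds used_dma32_mem by what the DMA32 zone can
actually provide.

/*
 * Deferred dma32 accounting (hypothetical). Two tasks that both
 * passed ttm_mem_admit() can land here concurrently; the earlier
 * check only looked at the sum, so used_dma32_mem can grow past
 * the real size of the DMA32 zone.
 */
static void ttm_mem_account_dma32(struct ttm_mem_acct *acct, u64 size)
{
	spin_lock(&acct->lock);
	acct->used_mem -= size;
	acct->used_dma32_mem += size; /* no dma32 capacity check */
	spin_unlock(&acct->lock);
}
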
With the proposed code there's also a theoretical problem in that a
potentially huge number of pages can be dropped from the accounting
before they are actually freed.
A possible way around all this is to pre-allocate out of *all* zones,
and after the big allocation release memory back to the relevant zones.
If such a big allocation fails, one needs to revert to a page-by-page
scheme.
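
As a sketch of that reservation pattern (hypothetical per-zone
structures, not existing ttm code): reserve the worst case against
every zone up front, release the surplus back to the zones the
allocation didn't touch once placement is known, and on reservation
failure undo everything so the caller can fall back to page-by-page.

#include <linux/errno.h>
#include <linux/spinlock.h>
#include <linux/types.h>

/* Hypothetical per-zone accounting, one instance per memory zone. */
struct ttm_zone {
	spinlock_t lock;
	u64 max;   /* zone capacity available to ttm */
	u64 used;  /* currently accounted in this zone */
};

static bool ttm_zone_reserve(struct ttm_zone *z, u64 size)
{
	bool ok;

	spin_lock(&z->lock);
	ok = z->used + size <= z->max;
	if (ok)
		z->used += size;
	spin_unlock(&z->lock);
	return ok;
}

static void ttm_zone_release(struct ttm_zone *z, u64 size)
{
	spin_lock(&z->lock);
	z->used -= size;
	spin_unlock(&z->lock);
}

/*
 * Reserve a batch against all zones before the big allocation; the
 * caller later releases the reservation from the zones whose pages
 * were not actually used. On failure, undo the partial reservation
 * and return -ENOMEM so the caller can revert to page-by-page.
 */
static int ttm_batch_reserve(struct ttm_zone *zones, int nzones, u64 size)
{
	int i, j;

	for (i = 0; i < nzones; i++) {
		if (!ttm_zone_reserve(&zones[i], size)) {
			for (j = 0; j < i; j++)
				ttm_zone_release(&zones[j], size);
			return -ENOMEM;
		}
	}
	return 0;
}
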
/Thomas