On Thu, Jan 13, 2022 at 5:05 AM Christian König
<christian.koenig@xxxxxxx> wrote:
> Am 13.01.22 um 14:00 schrieb Ruhl, Michael J:
> >> -----Original Message-----
> >> From: dri-devel <dri-devel-bounces@xxxxxxxxxxxxxxxxxxxxx> On Behalf Of
> >> Ruhl, Michael J
> >>> -----Original Message-----
> >>> From: dri-devel <dri-devel-bounces@xxxxxxxxxxxxxxxxxxxxx> On Behalf Of
> >>> guangming.cao@xxxxxxxxxxxx
> >>> + /*
> >>> +  * Invalid size check. The "len" should be less than totalram.
> >>> +  *
> >>> +  * Without this check, once the invalid size allocation runs on a process that
> >>> +  * can't be killed by OOM flow(such as "gralloc" on Android devices), it will
> >>> +  * cause a kernel exception, and to make matters worse, we can't find who are using
> >>> +  * so many memory with "dma_buf_debug_show" since the relevant dma-buf hasn't exported.
> >>> +  */
> >>> + if (len >> PAGE_SHIFT > totalram_pages())
> >> If your "heap" is from cma, is this still a valid check?
> >
> > And thinking a bit further, if I create a heap from something else (say device memory),
> > you will need to be able to figure out the maximum allowable check for the specific
> > heap.
> >
> > Maybe the heap needs a callback for max size?
>
> Well we currently maintain a separate allocator and don't use dma-heap,
> but yes we have systems with 16GiB device and only 8GiB system memory so
> that check here is certainly not correct.

Good point.

> In general I would rather let the system run into -ENOMEM or -EINVAL
> from the allocator instead.

Probably the simpler solution is to push the allocation check down into
the heap driver, rather than doing it at the top level here.

For CMA or other contiguous heaps, letting the allocator fail is fast
enough. For noncontiguous buffers, like the system heap, the allocation
can burn a lot of time and consume a lot of memory (causing other
trouble) before a large allocation might naturally fail.

thanks
-john
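For reference, a completely untested sketch of what I mean, assuming the
current system_heap_allocate() entry point in
drivers/dma-buf/heaps/system_heap.c (the exact name and signature below
are from memory, so treat them as illustrative only):

static struct dma_buf *system_heap_allocate(struct dma_heap *heap,
                                            unsigned long len,
                                            unsigned long fd_flags,
                                            unsigned long heap_flags)
{
        /*
         * The system heap can never satisfy a request larger than
         * total system RAM, so fail early here instead of burning a
         * lot of time and memory allocating pages before eventually
         * hitting -ENOMEM. Heaps backed by other memory (CMA, device
         * memory) can apply whatever limit makes sense for them, or
         * skip the check entirely.
         */
        if (len >> PAGE_SHIFT > totalram_pages())
                return ERR_PTR(-EINVAL);

        /* ... existing system heap allocation path ... */
}

Other heaps would keep their own allocate() callbacks unchanged, so a
16GiB device heap on an 8GiB system wouldn't be artificially capped.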