From: Takashi Iwai <tiwai@xxxxxxx>

commit 5c1733e33c888a3cb7f576564d8ad543d5ad4a9e upstream.

Currently the standard memory allocator (snd_dma_malloc_pages*()) passes
the byte size to allocate as is.  Most of the backends allocate real
pages, so the actual allocations end up aligned to the page size.
However, genalloc does not appear to guarantee this size alignment, so
it may result in accesses outside the buffer when the whole memory pages
are exposed via mmap.

To avoid such inconsistencies, this patch makes the allocation size
always page-aligned.

Note that, after this change, the snd_dma_buffer.bytes field contains
the aligned size, not the originally requested size.  This value is also
used when the pages are released.

Reviewed-by: Lars-Peter Clausen <lars@xxxxxxxxxx>
Link: https://lore.kernel.org/r/20201218145625.2045-2-tiwai@xxxxxxx
Signed-off-by: Takashi Iwai <tiwai@xxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 sound/core/memalloc.c | 1 +
 1 file changed, 1 insertion(+)

--- a/sound/core/memalloc.c
+++ b/sound/core/memalloc.c
@@ -177,6 +177,7 @@ int snd_dma_alloc_pages(int type, struct
 	if (WARN_ON(!dmab))
 		return -ENXIO;
 
+	size = PAGE_ALIGN(size);
 	dmab->dev.type = type;
 	dmab->dev.dev = device;
 	dmab->bytes = 0;
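
For illustration only, a minimal userspace sketch of the rounding that
PAGE_ALIGN() applies in snd_dma_alloc_pages() after this patch.  It is not
kernel code; EX_PAGE_SIZE and EX_PAGE_ALIGN are hypothetical stand-ins that
assume a 4 KiB page size, mirroring the kernel's (size + PAGE_SIZE - 1) &
PAGE_MASK arithmetic.

/* Sketch: demonstrate page-size rounding of a requested allocation size. */
#include <stdio.h>
#include <stddef.h>

#define EX_PAGE_SIZE 4096UL                 /* assumed page size for the example */
#define EX_PAGE_ALIGN(sz) (((sz) + EX_PAGE_SIZE - 1) & ~(EX_PAGE_SIZE - 1))

int main(void)
{
	size_t sizes[] = { 1, 4096, 4097, 65536 + 100 };
	size_t i;

	for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
		printf("requested %zu -> allocated %zu\n",
		       sizes[i], (size_t)EX_PAGE_ALIGN(sizes[i]));

	/* e.g. requested 4097 -> allocated 8192: the buffer now spans whole
	 * pages, so an mmap of those pages cannot reach past the allocation. */
	return 0;
}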