On Tue, 02 Mar 2021 22:13:11 +0800, Christian König wrote:
>
> Am 02.03.21 um 07:42 schrieb Chen Li:
> > The number of chunks/chunks_array may be passed in
> > by userspace and can be large.
>
> I'm wondering if we shouldn't rather restrict the number of chunks.

If the number is restricted, will there be any risk, and what would be a proper limit here?

> > It has been observed to cause kcalloc failures from trinity fuzzy test:
> >
> > ```
> > WARNING: CPU: 0 PID: 5487 at mm/page_alloc.c:4385
> > __alloc_pages_nodemask+0x2d8/0x14d0
> >
> > ......
> >
> > Trace:
> > __warn.part.4+0x11c/0x174
> > __alloc_pages_nodemask+0x2d8/0x14d0
> > warn_slowpath_null+0x84/0xb0
> > __alloc_pages_nodemask+0x2d8/0x14d0
> > __alloc_pages_nodemask+0x2d8/0x14d0
> > alloc_pages_current+0xf0/0x1b0
> > free_buffer_head+0x88/0xf0
> > jbd2_journal_try_to_free_buffers+0x1e0/0x2a0
> > ext4_releasepage+0x84/0x140
> > release_pages+0x414/0x4c0
> > release_pages+0x42c/0x4c0
> > __find_get_block+0x1a4/0x5b0
> > alloc_pages_current+0xcc/0x1b0
> > kmalloc_order+0x30/0xb0
> > __kmalloc+0x300/0x390
> > kmalloc_order_trace+0x48/0x110
> > __kmalloc+0x300/0x390
> > radeon_cs_parser_init.part.1+0x74/0x670 [radeon]
> > crypto_shash_update+0x5c/0x1c0
> > radeon_cs_parser_init.part.1+0x74/0x670 [radeon]
> > __wake_up_common_lock+0xb8/0x210
> > radeon_cs_ioctl+0xc8/0xb80 [radeon]
> > radeon_cs_ioctl+0x50/0xb80 [radeon]
> > drm_ioctl_kernel+0xf4/0x160
> > radeon_cs_ioctl+0x0/0xb80 [radeon]
> > drm_ioctl_kernel+0xa0/0x160
> > drm_ioctl+0x2dc/0x4f0
> > radeon_drm_ioctl+0x80/0xf0 [radeon]
> > new_sync_write+0x120/0x1c0
> > timerqueue_add+0x88/0x140
> > do_vfs_ioctl+0xe4/0x990
> > ksys_ioctl+0xdc/0x110
> > ksys_ioctl+0x78/0x110
> > sys_ioctl+0x2c/0x50
> > entSys+0xa0/0xc0
>
> Please drop the backtrace, it doesn't add any value to the commit log.

Ok, will drop it in v2.

> > ```
> >
> > Obviously, the required order in this case is larger than MAX_ORDER.
> > So, just use kvmalloc instead.
> >
> > Signed-off-by: Chen Li <chenli@xxxxxxxxxxxxx>
>
> Reviewed-by: Christian König <christian.koenig@xxxxxxx>
>
> The same patch should probably be applied to amdgpu as well, if we don't
> already use kvmalloc there.

Fair enough, will add it into a v2 as a series with this patch.

> Regards,
> Christian.
>
> > ---
> >   drivers/gpu/drm/radeon/radeon_cs.c | 8 ++++----
> >   1 file changed, 4 insertions(+), 4 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/radeon/radeon_cs.c b/drivers/gpu/drm/radeon/radeon_cs.c
> > index 35e937d39b51..fb736ef9f9aa 100644
> > --- a/drivers/gpu/drm/radeon/radeon_cs.c
> > +++ b/drivers/gpu/drm/radeon/radeon_cs.c
> > @@ -288,7 +288,7 @@ int radeon_cs_parser_init(struct radeon_cs_parser *p, void *data)
> >   	p->chunk_relocs = NULL;
> >   	p->chunk_flags = NULL;
> >   	p->chunk_const_ib = NULL;
> > -	p->chunks_array = kcalloc(cs->num_chunks, sizeof(uint64_t), GFP_KERNEL);
> > +	p->chunks_array = kvmalloc_array(cs->num_chunks, sizeof(uint64_t), GFP_KERNEL);
> >   	if (p->chunks_array == NULL) {
> >   		return -ENOMEM;
> >   	}
> > @@ -299,7 +299,7 @@ int radeon_cs_parser_init(struct radeon_cs_parser *p, void *data)
> >   	}
> >   	p->cs_flags = 0;
> >   	p->nchunks = cs->num_chunks;
> > -	p->chunks = kcalloc(p->nchunks, sizeof(struct radeon_cs_chunk), GFP_KERNEL);
> > +	p->chunks = kvmalloc_array(p->nchunks, sizeof(struct radeon_cs_chunk), GFP_KERNEL);
> >   	if (p->chunks == NULL) {
> >   		return -ENOMEM;
> >   	}
> > @@ -452,8 +452,8 @@ static void radeon_cs_parser_fini(struct radeon_cs_parser *parser, int error, bo
> >   	kvfree(parser->vm_bos);
> >   	for (i = 0; i < parser->nchunks; i++)
> >   		kvfree(parser->chunks[i].kdata);
> > -	kfree(parser->chunks);
> > -	kfree(parser->chunks_array);
> > +	kvfree(parser->chunks);
> > +	kvfree(parser->chunks_array);
> >   	radeon_ib_free(parser->rdev, &parser->ib);
> >   	radeon_ib_free(parser->rdev, &parser->const_ib);
> >   }
> > --
> > 2.30.0
> >
> >
> > _______________________________________________
> > amd-gfx mailing list
> > amd-gfx@xxxxxxxxxxxxxxxxxxxxx
> > https://lists.freedesktop.org/mailman/listinfo/amd-gfx
>

Regards,
Chen Li.

_______________________________________________
amd-gfx mailing list
amd-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/amd-gfx