> On Feb 23, 2018, at 9:13 PM, Saeed Mahameed <saeedm@xxxxxxxxxxxx> wrote:
>
>> On Thu, 2018-02-22 at 16:04 -0800, Santosh Shilimkar wrote:
>> Hi Saeed
>>
>>> On 2/21/2018 12:13 PM, Saeed Mahameed wrote:
>>> From: Yonatan Cohen <yonatanc@xxxxxxxxxxxx>
>>>
>>> The current implementation of create CQ requires contiguous
>>> memory; this requirement is problematic once memory is
>>> fragmented or the system is low on memory, and it causes
>>> failures in dma_zalloc_coherent().
>>>
>>> This patch implements a new scheme of fragmented CQs to overcome
>>> this issue by introducing a new type, 'struct mlx5_frag_buf_ctrl',
>>> to allocate fragmented buffers rather than contiguous ones.
>>>
>>> Base the Completion Queues (CQs) on this new fragmented buffer.
>>>
>>> It fixes the following crashes:
>>> kworker/29:0: page allocation failure: order:6, mode:0x80d0
>>> CPU: 29 PID: 8374 Comm: kworker/29:0 Tainted: G OE 3.10.0
>>> Workqueue: ib_cm cm_work_handler [ib_cm]
>>> Call Trace:
>>> [<>] dump_stack+0x19/0x1b
>>> [<>] warn_alloc_failed+0x110/0x180
>>> [<>] __alloc_pages_slowpath+0x6b7/0x725
>>> [<>] __alloc_pages_nodemask+0x405/0x420
>>> [<>] dma_generic_alloc_coherent+0x8f/0x140
>>> [<>] x86_swiotlb_alloc_coherent+0x21/0x50
>>> [<>] mlx5_dma_zalloc_coherent_node+0xad/0x110 [mlx5_core]
>>> [<>] ? mlx5_db_alloc_node+0x69/0x1b0 [mlx5_core]
>>> [<>] mlx5_buf_alloc_node+0x3e/0xa0 [mlx5_core]
>>> [<>] mlx5_buf_alloc+0x14/0x20 [mlx5_core]
>>> [<>] create_cq_kernel+0x90/0x1f0 [mlx5_ib]
>>> [<>] mlx5_ib_create_cq+0x3b0/0x4e0 [mlx5_ib]
>>>
>>> Signed-off-by: Yonatan Cohen <yonatanc@xxxxxxxxxxxx>
>>> Reviewed-by: Tariq Toukan <tariqt@xxxxxxxxxxxx>
>>> Signed-off-by: Leon Romanovsky <leon@xxxxxxxxxx>
>>> Signed-off-by: Saeed Mahameed <saeedm@xxxxxxxxxxxx>
>>> ---
>>
>> Jason mentioned this patch to me off-list. We were seeing a
>> similar issue with SRQs and QPs, so I am wondering whether you
>> have any plans to make a similar change for the other resources
>> as well, so that they don't rely on higher-order page allocations
>> for ICM tables.
>>
>
> Hi Santosh,
>
> Adding Majd.
>
> Which ULP is in question? How big are the QPs/SRQs you create that
> lead to this problem?
>
> For ICM tables we already allocate only order-0 pages;
> see alloc_system_page() in
> drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
>
> But for kernel RDMA SRQ and QP buffers there is room for
> improvement.
>
> Majd, do you know if we have any near-future plans for this?

It's in our plans to move all the buffers to use 0-order pages.

Santosh,
Is this RDS? Do you have a persistent failure with some configuration?
Can you please share more information?

Thanks

>
>> Regards,
>> Santosh
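
[Editor's note: for readers unfamiliar with the approach being discussed, the sketch
below illustrates the general idea behind the fix: instead of one large
dma_alloc_coherent() call, which needs a high-order contiguous allocation (order 6 in
the trace above) and can fail when memory is fragmented, the buffer is built from an
array of PAGE_SIZE (order-0) fragments whose DMA addresses are later handed to the
hardware. This is only an illustrative sketch, not the actual mlx5_frag_buf_ctrl
implementation; the frag_buf/frag_buf_alloc names are invented for the example.]

#include <linux/dma-mapping.h>
#include <linux/slab.h>

struct frag_buf_frag {
	void		*buf;
	dma_addr_t	map;	/* per-fragment DMA address, passed to HW */
};

struct frag_buf {
	struct frag_buf_frag	*frags;
	int			nfrags;
};

static int frag_buf_alloc(struct device *dev, struct frag_buf *fbuf,
			  size_t size)
{
	int i;

	fbuf->nfrags = DIV_ROUND_UP(size, PAGE_SIZE);
	fbuf->frags = kcalloc(fbuf->nfrags, sizeof(*fbuf->frags), GFP_KERNEL);
	if (!fbuf->frags)
		return -ENOMEM;

	/* Each fragment is a single page, i.e. an order-0 allocation,
	 * so no contiguous high-order block is ever required.
	 */
	for (i = 0; i < fbuf->nfrags; i++) {
		fbuf->frags[i].buf = dma_alloc_coherent(dev, PAGE_SIZE,
							&fbuf->frags[i].map,
							GFP_KERNEL);
		if (!fbuf->frags[i].buf)
			goto err_free;
	}
	return 0;

err_free:
	while (--i >= 0)
		dma_free_coherent(dev, PAGE_SIZE, fbuf->frags[i].buf,
				  fbuf->frags[i].map);
	kfree(fbuf->frags);
	return -ENOMEM;
}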