Majd, do you have a target in days/weeks? I ask because the current state is
an order 8-9 allocation for any Lustre installation that uses MOFED 4.3 and,
I think, the vanilla kernel, so Lustre becomes unusable after a short time
due to memory fragmentation.

> On Jun 21, 2018, at 23:16, Majd Dibbiny <majd@xxxxxxxxxxxx> wrote:
>
>
>> On Jun 21, 2018, at 9:06 PM, Leon Romanovsky <leon@xxxxxxxxxx> wrote:
>>
>>> On Thu, Jun 21, 2018 at 08:56:23PM +0300, Alexey Lyashkov wrote:
>>> Majd,
>>>
>>> how soon? I plan to work on a patch
> We are working on it. Targeting 4.19.
>>> next week;
>>> it looks like a copy-paste of the commit Leon pointed to, with some minor
>>> changes, applied to a different buffer.
>>>
>>
>> Changing CQ was low-hanging fruit because CQs are in multiples of
>> CQEs (64 bytes); for QPs you will need to take into account different
>> sizes while allocating fragmented buffers.
>>
>> Thanks
>>
>>>> On Jun 21, 2018, at 14:20, Majd Dibbiny <majd@xxxxxxxxxxxx> wrote:
>>>>
>>>>
>>>>> On Jun 21, 2018, at 1:14 PM, Leon Romanovsky <leon@xxxxxxxxxx> wrote:
>>>>>
>>>>>> On Wed, Jun 20, 2018 at 10:48:16AM +0300, Alexey Lyashkov wrote:
>>>>>> Hi All,
>>>>>>
>>>>>> while testing Lustre I have seen very high-order allocations being done
>>>>>> by the mlx5 driver. A similar bug was reported against the mlx4 driver
>>>>>> in Lustre ticket https://jira.whamcloud.com/browse/LU-10736.
>>>>>> As I see it, both failures are related to the SGE array buffer
>>>>>> allocation (sorry, I don't know the right term for it), but it uses
>>>>>> contiguous space instead of fragmented. I have several questions about
>>>>>> it: what is the reason for this? Why not use a fragmented allocation
>>>>>> and merge it logically, as was done before for mlx4?
>>>>>
>>>>> I don't know the reasons for it, but we are working to avoid such large
>>>>> allocations, for example see commit 88ca8be0037 "IB/mlx5: Implement
>>>>> fragmented completion queue (CQ)"
>>>> Soon we are going to do it for QPs of mlx5 and mlx4 as well.
>>>>>
>>>>> Thanks
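
To illustrate the fragmented-buffer idea Leon describes: because every CQE is
a fixed 64 bytes, a completion queue can be split into single-page fragments
and any entry located by index arithmetic alone, so the allocator never sees
a high-order request. Below is a minimal userspace sketch of that idea, not
the mlx5 driver's actual code; the frag_buf structure and the frag_buf_alloc
and frag_buf_get helpers are hypothetical names chosen for illustration.

/*
 * Sketch: build a logical buffer out of page-sized fragments instead of
 * one large physically contiguous allocation.  All names are hypothetical;
 * this is a model of the technique, not the mlx5 implementation.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define FRAG_SIZE  4096   /* stand-in for PAGE_SIZE */
#define ENTRY_SIZE 64     /* e.g. one CQE */

struct frag_buf {
	void   **frags;           /* array of page-sized fragments */
	size_t   nfrags;
	size_t   entries_per_frag;
};

static int frag_buf_alloc(struct frag_buf *buf, size_t nentries)
{
	size_t per_frag = FRAG_SIZE / ENTRY_SIZE;
	size_t nfrags = (nentries + per_frag - 1) / per_frag;

	buf->frags = calloc(nfrags, sizeof(*buf->frags));
	if (!buf->frags)
		return -1;

	for (size_t i = 0; i < nfrags; i++) {
		/* Each fragment is a single-page allocation, so no
		 * high-order request is ever made. */
		buf->frags[i] = calloc(1, FRAG_SIZE);
		if (!buf->frags[i])
			goto err;
	}
	buf->nfrags = nfrags;
	buf->entries_per_frag = per_frag;
	return 0;
err:
	for (size_t i = 0; i < nfrags; i++)
		free(buf->frags[i]);     /* free(NULL) is a no-op */
	free(buf->frags);
	return -1;
}

/* Logical index -> address: pick the fragment, then the offset within it. */
static void *frag_buf_get(struct frag_buf *buf, size_t n)
{
	return (char *)buf->frags[n / buf->entries_per_frag] +
	       (n % buf->entries_per_frag) * ENTRY_SIZE;
}

int main(void)
{
	struct frag_buf buf;

	/* 64K entries = 4 MB total, yet only page-sized allocations occur. */
	if (frag_buf_alloc(&buf, 65536))
		return 1;

	memset(frag_buf_get(&buf, 65535), 0xff, ENTRY_SIZE);
	printf("entry 65535 lives in fragment %zu\n",
	       (size_t)65535 / buf.entries_per_frag);

	for (size_t i = 0; i < buf.nfrags; i++)
		free(buf.frags[i]);
	free(buf.frags);
	return 0;
}

Note that ENTRY_SIZE divides FRAG_SIZE, so an entry never straddles a
fragment boundary. That property is what makes fixed-size CQEs the low-hanging
fruit here: QP work queues mix WQE sizes, which is the extra bookkeeping Leon
says the QP conversion will need.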