RFQ: mlx5: Sizing of completion EQs

Hey there,

From IBTA:

C17-19: For deadlock prevention, the CA shall not continuously and permanently assert backpressure (i.e. fail to grant link credits).

My understanding is that CX-5, when an EQ becomes full, asserts back-pressure into the fabric, thus violating C17-19.

For a system with N CQs, if all of them are associated with the same EQ, up to N EQEs may be written (or attempted) to the EQ.
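
To make the arithmetic concrete, here is a small stand-alone C program (illustration only, not driver code; the CQ and EQ counts are made-up example values):

	#include <stdio.h>

	/*
	 * Illustration only (not driver code): in the worst case, every
	 * CQ attached to an EQ can have one EQE outstanding at the same
	 * time, so the EQ needs at least one entry per attached CQ.
	 * The counts below are made-up example values.
	 */
	int main(void)
	{
		unsigned int num_cqs = 2048;   /* CQs on one EQ (example) */
		unsigned int eq_size = 1024;   /* MLX5_COMP_EQ_SIZE */

		printf("worst-case outstanding EQEs: %u, EQ entries: %u\n",
		       num_cqs, eq_size);
		if (num_cqs > eq_size)
			printf("EQ can fill -> back-pressure, i.e. C17-19\n");
		return 0;
	}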

I see that the mlx4 driver handles this "correctly" by sizing the synchronous, affiliated completion EQs proportionally to the number of CQs. From mlx4_init_eq_table():

	err = mlx4_create_eq(dev, dev->quotas.cq + MLX4_NUM_SPARE_EQE,
	[]


The mlx5 driver, however, does not. In create_comp_eqs() it does:

	nent = MLX5_COMP_EQ_SIZE;

which is hard-coded to 1024.

So, a CX-5 system with more than 1K CQs may violate C17-19.
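
For comparison, a rough, untested sketch of what an mlx4-style sizing could look like; log_max_cq mirrors the mlx5 HCA capability of that name, while SPARE_EQE and comp_eq_nent() are hypothetical names of my own (mlx5 defines no spare-EQE constant):

	#include <stdio.h>

	/*
	 * Hypothetical sketch, not actual mlx5 code: size each
	 * completion EQ in proportion to the CQs it may serve, the way
	 * mlx4_init_eq_table() does. SPARE_EQE stands in for a constant
	 * analogous to mlx4's MLX4_NUM_SPARE_EQE (0x80).
	 */
	#define SPARE_EQE 0x80

	static unsigned int comp_eq_nent(unsigned int log_max_cq,
					 unsigned int ncomp_eqs)
	{
		unsigned int max_cq = 1u << log_max_cq;

		/* One entry per CQ this EQ could serve, plus spares. */
		return max_cq / ncomp_eqs + SPARE_EQE;
	}

	int main(void)
	{
		/* Example: 2^17 CQs spread over 8 completion EQs. */
		printf("nent = %u (vs. hard-coded 1024)\n",
		       comp_eq_nent(17, 8));
		return 0;
	}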

What is the reason this is so different from the mlx4 driver?


Thxs, Håkon