3.19.8-ckt10 -stable review patch.  If anyone has any objections, please
let me know.

------------------

From: Christophe Lombard <clombard@xxxxxxxxxxxxxxxxxx>

commit 4108efb02daa09cbb5db048ada55a5b021b5183d upstream.

The scheduled process area is currently allocated before the correct
maximum number of processes is assigned to the AFU, which means we only
ever allocate a fixed number of pages for the scheduled process area.
This limits us to 958 processes with 2 x 64K pages. If we try to use
more processes than that, we would probably overrun the buffer and
corrupt memory or crash.

AFUs that require three or more interrupts per process will not be
affected, as they are already limited to fewer processes than that, but
we could hit it on an AFU that requires 0, 1 or 2 interrupts per
process, or when using 4K pages.

This patch moves the initialisation of num_procs to before the SPA
allocation, so that enough pages are allocated for the number of
processes that the AFU supports.

Signed-off-by: Christophe Lombard <clombard@xxxxxxxxxxxxxxxxxx>
Signed-off-by: Ian Munsie <imunsie@xxxxxxxxxxx>
Signed-off-by: Michael Ellerman <mpe@xxxxxxxxxxxxxx>
Signed-off-by: Kamal Mostafa <kamal@xxxxxxxxxxxxx>
---
 drivers/misc/cxl/native.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/misc/cxl/native.c b/drivers/misc/cxl/native.c
index f2b37b4..84fd936 100644
--- a/drivers/misc/cxl/native.c
+++ b/drivers/misc/cxl/native.c
@@ -377,6 +377,7 @@ static int activate_afu_directed(struct cxl_afu *afu)
 
 	dev_info(&afu->dev, "Activating AFU directed mode\n");
 
+	afu->num_procs = afu->max_procs_virtualised;
 	if (alloc_spa(afu))
 		return -ENOMEM;
 
@@ -385,7 +386,6 @@ static int activate_afu_directed(struct cxl_afu *afu)
 	cxl_p1n_write(afu, CXL_PSL_ID_An, CXL_PSL_ID_An_F | CXL_PSL_ID_An_L);
 
 	afu->current_mode = CXL_MODE_DIRECTED;
-	afu->num_procs = afu->max_procs_virtualised;
 
 	if ((rc = cxl_chardev_m_afu_add(afu)))
 		return rc;
-- 
1.9.1
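
For reference, a minimal user-space sketch of why the ordering matters:
alloc_spa() sizes the scheduled process area by doubling a page
allocation until spa_max_procs() covers afu->num_procs, so calling it
while num_procs is still zero pins the area at its two-page minimum
(958 process elements with 64K pages, matching the figure in the commit
message). The loop and formula below are paraphrased from
drivers/misc/cxl/native.c; the 64K PAGE_SIZE and the num_procs value of
4096 are illustrative assumptions, not values taken from this patch.

/*
 * Standalone model of the SPA sizing loop in alloc_spa() from
 * drivers/misc/cxl/native.c. Illustration only, not the kernel code.
 */
#include <stdio.h>

#define PAGE_SIZE (64 * 1024)	/* assumed 64K pages, as in the commit message */

/* CAIA SPA layout solved for the number of process elements */
static unsigned long spa_max_procs(unsigned long spa_size)
{
	return ((spa_size / 8) - 96) / 17;
}

/* The do/while sizing loop from alloc_spa(), minus the actual allocation */
static void spa_pages_for(unsigned long num_procs)
{
	int spa_order = 0;
	unsigned long spa_size, spa_max;

	do {
		spa_order++;
		spa_size = (1UL << spa_order) * PAGE_SIZE;
		spa_max = spa_max_procs(spa_size);
	} while (spa_max < num_procs);

	printf("num_procs=%4lu -> %2lu pages, room for %lu processes\n",
	       num_procs, 1UL << spa_order, spa_max);
}

int main(void)
{
	spa_pages_for(0);	/* before the fix: num_procs not yet assigned */
	spa_pages_for(4096);	/* after: num_procs = max_procs_virtualised
				 * (4096 is a hypothetical value) */
	return 0;
}

With num_procs still zero the loop exits after one pass at two pages
(958 process elements); with num_procs set first it keeps doubling until
the area is large enough, which is exactly what the one-line reordering
in the hunk above achieves.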