> -----Original Message-----
> From: linux-scsi-owner@xxxxxxxxxxxxxxx [mailto:linux-scsi-
> owner@xxxxxxxxxxxxxxx] On Behalf Of Sreekanth Reddy
> Sent: Tuesday, 09 December, 2014 6:17 AM
> To: martin.petersen@xxxxxxxxxx; jejb@xxxxxxxxxx; hch@xxxxxxxxxxxxx
...
> Change_set:
> 1. Added an affinity_hint variable of type cpumask_var_t to the
>    adapter_reply_queue structure, and allocated memory for this variable
>    by calling zalloc_cpumask_var.
> 2. Called the irq_set_affinity_hint API for each MSI-X vector to
>    affinitize it with the calculated CPUs at driver initialization time.
> 3. While freeing an MSI-X vector, called the same API with NULL as the
>    cpumask argument to release that vector's CPU affinity mask.
> 4. Then called the free_cpumask_var API to free the memory allocated in
>    step 1.
...
> diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c
> b/drivers/scsi/mpt3sas/mpt3sas_base.c
> index 1560115..f0f8ba0 100644
> --- a/drivers/scsi/mpt3sas/mpt3sas_base.c
> +++ b/drivers/scsi/mpt3sas/mpt3sas_base.c
...
> @@ -1609,6 +1611,10 @@ _base_request_irq(struct MPT3SAS_ADAPTER *ioc, u8
> index, u32 vector)
>  	reply_q->ioc = ioc;
>  	reply_q->msix_index = index;
>  	reply_q->vector = vector;
> +
> +	if (!zalloc_cpumask_var(&reply_q->affinity_hint, GFP_KERNEL))
> +		return -ENOMEM;

I think this will create the problem Alex Thorlton just reported with lpfc
on a system with a huge number (6144) of CPUs.  See this thread:
[BUG] kzalloc overflow in lpfc driver on 6k core system

---
Rob Elliott
HP Server Storage

--
To unsubscribe from this list: send the line "unsubscribe linux-scsi"
in the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html