RE: [PATCH V3 1/2] smartpqi: initial commit of Microsemi smartpqi driver

> -----Original Message-----
> From: Johannes Thumshirn [mailto:jthumshirn@xxxxxxx]
> Sent: Friday, June 17, 2016 4:57 AM
> To: Don Brace
> Cc: jejb@xxxxxxxxxxxxxxxxxx; Viswas G; Mahesh Rajashekhara;
> hch@xxxxxxxxxxxxx; Scott Teel; Kevin Barnett; Justin Lindley; Scott Benesh;
> elliott@xxxxxxx; linux-scsi@xxxxxxxxxxxxxxx
> Subject: Re: [PATCH V3 1/2] smartpqi: initial commit of Microsemi smartpqi
> driver
> 
> Hi Don/Kevin,
> 
> some comments inline (mostly stylistic, nothing architectural)
> 
> On Thu, Jun 16, 2016 at 05:48:16PM -0500, Don Brace wrote:
> > From: Kevin Barnett <kevin.barnett@xxxxxxxxxxxxx>
> >
> > This initial commit contains WIP of Microsemi's smartpqi module.
> >
> > Reviewed-by: Scott Benesh <scott.benesh@xxxxxxxxxxxxx>
> > Reviewed-by: Kevin Barnett <kevin.barnett@xxxxxxxxxxxxx>
> 
> If this is From: Kevin, it should have his S-o-b as well.
> 
> > Signed-off-by: Don Brace <don.brace@xxxxxxxxxxxxx>

Done
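
For reference, the V4 tag block should read roughly as follows (a sketch
assembled from the tags already in this thread, not the exact V4 text):

From: Kevin Barnett <kevin.barnett@xxxxxxxxxxxxx>

Reviewed-by: Scott Benesh <scott.benesh@xxxxxxxxxxxxx>
Signed-off-by: Kevin Barnett <kevin.barnett@xxxxxxxxxxxxx>
Signed-off-by: Don Brace <don.brace@xxxxxxxxxxxxx>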

> > ---
> 
> [...]
> 
> > + */
> > +
> > +#if !defined(_SMARTPQI_H)
> > +#define _SMARTPQI_H
> > +
> > +#pragma pack(1)
> 
> I know GCC does understand #pragma pack() to be compatible with MSVC
> but please
> use the kernel's __packed convention on structs you want to have packed.
> 

We prefer to leave this "as is" for these reasons:

We do not use __packed on other Microsemi Linux drivers.

There's no consistency in the usage of __packed. Just a quick survey of the kernel source reveals at least four styles (see the sketch after the list):

Syntax #1:

struct xxx {
.
} __packed;

Syntax #2:

struct __packed xxx {
.
};

Syntax #3:
struct xxx {
.
} __attribute__((__packed__));

Syntax #4:

struct __attribute__((__packed__)) xxx {
.
};
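
For illustration, a minimal sketch contrasting the pragma with syntax #1
(struct and field names are hypothetical; both yield the same packed layout
under GCC):

/* MSVC-style pragma, as used in the driver today: */
#pragma pack(1)
struct pqi_example_a {
	u8	op;
	u32	len;	/* packed: no padding inserted before this field */
};
#pragma pack()

/* Kernel __packed convention (syntax #1 above), same layout: */
struct pqi_example_b {
	u8	op;
	u32	len;
} __packed;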


> > +
> > +#define PQI_DEVICE_SIGNATURE "PQI DREG"
> 
> [...]
> 
> > +     pqi_index_t     oq_ci_copy;
> > +     struct task_struct *task;
> > +     u16             int_msg_num;
> > +};
> > +
> 
> Just for clarification (sorry, haven't looked up the specs yet): is this a
> queue pair or more than two queues (looks like 3 to me)?
> 

There are 3 queues per group: two inbound (iq_id[2], one per path) and one outbound (oq_id).


> > +struct pqi_queue_group {
> > +     struct pqi_ctrl_info *ctrl_info;        /* backpointer */
> > +     u16             iq_id[2];
> > +     u16             oq_id;
> 
> [...]
> 
> > +#define RAID_CTLR_LUNID              "\0\0\0\0\0\0\0\0"
> > +
> > +struct pqi_scsi_dev_t {
> 
> For the sake of consistency with the other structs, "struct pqi_scsi_dev"?

Done

> 
> > +     int     devtype;                /* as reported by INQUIRY command */
> > +     u8      device_type;            /* as reported by */
> 
> [...]
> 
> > +
> > +/* Define the PCI IDs for the controllers that we support. */
> > +static const struct pci_device_id pqi_pci_id_table[] = {
> > +     {
> > +             .vendor = PCI_VENDOR_ID_ADAPTEC2,
> > +             .device = 0x028f,
> > +             .subvendor = PCI_VENDOR_ID_ADAPTEC2,
> > +             .subdevice = 0x0110,
> > +             .class = 0,
> > +             .class_mask = 0,
> > +             .driver_data = 0
> > +     },
> > +     {
> > +             .vendor = PCI_VENDOR_ID_ADAPTEC2,
> > +             .device = 0x028f,
> > +             .subvendor = PCI_VENDOR_ID_HP,
> > +             .subdevice = 0x0600,
> > +             .class = 0,
> > +             .class_mask = 0,
> > +             .driver_data = 0
> > +     },
> > +     {
> > +             .vendor = PCI_VENDOR_ID_ADAPTEC2,
> > +             .device = 0x028f,
> > +             .subvendor = PCI_VENDOR_ID_HP,
> > +             .subdevice = 0x0601,
> > +             .class = 0,
> > +             .class_mask = 0,
> > +             .driver_data = 0
> > +     },
> > +     {
> > +             .vendor = PCI_VENDOR_ID_ADAPTEC2,
> > +             .device = 0x028f,
> > +             .subvendor = PCI_VENDOR_ID_HP,
> > +             .subdevice = 0x0602,
> > +             .class = 0,
> > +             .class_mask = 0,
> > +             .driver_data = 0
> > +     },
> > +     {
> > +             .vendor = PCI_VENDOR_ID_ADAPTEC2,
> > +             .device = 0x028f,
> > +             .subvendor = PCI_VENDOR_ID_HP,
> > +             .subdevice = 0x0603,
> > +             .class = 0,
> > +             .class_mask = 0,
> > +             .driver_data = 0
> > +     },
> > +     {
> > +             .vendor = PCI_VENDOR_ID_ADAPTEC2,
> > +             .device = 0x028f,
> > +             .subvendor = PCI_VENDOR_ID_HP,
> > +             .subdevice = 0x0650,
> > +             .class = 0,
> > +             .class_mask = 0,
> > +             .driver_data = 0
> > +     },
> > +     {
> > +             .vendor = PCI_VENDOR_ID_ADAPTEC2,
> > +             .device = 0x028f,
> > +             .subvendor = PCI_VENDOR_ID_HP,
> > +             .subdevice = 0x0651,
> > +             .class = 0,
> > +             .class_mask = 0,
> > +             .driver_data = 0
> > +     },
> > +     {
> > +             .vendor = PCI_VENDOR_ID_ADAPTEC2,
> > +             .device = 0x028f,
> > +             .subvendor = PCI_VENDOR_ID_HP,
> > +             .subdevice = 0x0652,
> > +             .class = 0,
> > +             .class_mask = 0,
> > +             .driver_data = 0
> > +     },
> > +     {
> > +             .vendor = PCI_VENDOR_ID_ADAPTEC2,
> > +             .device = 0x028f,
> > +             .subvendor = PCI_VENDOR_ID_HP,
> > +             .subdevice = 0x0653,
> > +             .class = 0,
> > +             .class_mask = 0,
> > +             .driver_data = 0
> > +     },
> > +     {
> > +             .vendor = PCI_VENDOR_ID_ADAPTEC2,
> > +             .device = 0x028f,
> > +             .subvendor = PCI_VENDOR_ID_HP,
> > +             .subdevice = 0x0654,
> > +             .class = 0,
> > +             .class_mask = 0,
> > +             .driver_data = 0
> > +     },
> > +     {
> > +             .vendor = PCI_VENDOR_ID_ADAPTEC2,
> > +             .device = 0x028f,
> > +             .subvendor = PCI_VENDOR_ID_HP,
> > +             .subdevice = 0x0655,
> > +             .class = 0,
> > +             .class_mask = 0,
> > +             .driver_data = 0
> > +     },
> > +     {
> > +             .vendor = PCI_VENDOR_ID_ADAPTEC2,
> > +             .device = 0x028f,
> > +             .subvendor = PCI_VENDOR_ID_HP,
> > +             .subdevice = 0x0700,
> > +             .class = 0,
> > +             .class_mask = 0,
> > +             .driver_data = 0
> > +     },
> > +     {
> > +             .vendor = PCI_VENDOR_ID_ADAPTEC2,
> > +             .device = 0x028f,
> > +             .subvendor = PCI_VENDOR_ID_HP,
> > +             .subdevice = 0x0701,
> > +             .class = 0,
> > +             .class_mask = 0,
> > +             .driver_data = 0
> > +     },
> > +     {
> > +             .vendor = PCI_VENDOR_ID_ADAPTEC2,
> > +             .device = 0x028f,
> > +             .subvendor = PCI_VENDOR_ID_ADAPTEC2,
> > +             .subdevice = 0x0800,
> > +             .class = 0,
> > +             .class_mask = 0,
> > +             .driver_data = 0
> > +     },
> > +     {
> > +             .vendor = PCI_VENDOR_ID_ADAPTEC2,
> > +             .device = 0x028f,
> > +             .subvendor = PCI_VENDOR_ID_ADAPTEC2,
> > +             .subdevice = 0x0801,
> > +             .class = 0,
> > +             .class_mask = 0,
> > +             .driver_data = 0
> > +     },
> > +     {
> > +             .vendor = PCI_VENDOR_ID_ADAPTEC2,
> > +             .device = 0x028f,
> > +             .subvendor = PCI_VENDOR_ID_ADAPTEC2,
> > +             .subdevice = 0x0802,
> > +             .class = 0,
> > +             .class_mask = 0,
> > +             .driver_data = 0
> > +     },
> > +     {
> > +             .vendor = PCI_VENDOR_ID_ADAPTEC2,
> > +             .device = 0x028f,
> > +             .subvendor = PCI_VENDOR_ID_ADAPTEC2,
> > +             .subdevice = 0x0803,
> > +             .class = 0,
> > +             .class_mask = 0,
> > +             .driver_data = 0
> > +     },
> > +     {
> > +             .vendor = PCI_VENDOR_ID_ADAPTEC2,
> > +             .device = 0x028f,
> > +             .subvendor = PCI_VENDOR_ID_ADAPTEC2,
> > +             .subdevice = 0x0804,
> > +             .class = 0,
> > +             .class_mask = 0,
> > +             .driver_data = 0
> > +     },
> > +     {
> > +             .vendor = PCI_VENDOR_ID_ADAPTEC2,
> > +             .device = 0x028f,
> > +             .subvendor = PCI_VENDOR_ID_ADAPTEC2,
> > +             .subdevice = 0x0805,
> > +             .class = 0,
> > +             .class_mask = 0,
> > +             .driver_data = 0
> > +     },
> > +     {
> > +             .vendor = PCI_VENDOR_ID_ADAPTEC2,
> > +             .device = 0x028f,
> > +             .subvendor = PCI_VENDOR_ID_ADAPTEC2,
> > +             .subdevice = 0x0900,
> > +             .class = 0,
> > +             .class_mask = 0,
> > +             .driver_data = 0
> > +     },
> > +     {
> > +             .vendor = PCI_VENDOR_ID_ADAPTEC2,
> > +             .device = 0x028f,
> > +             .subvendor = PCI_VENDOR_ID_ADAPTEC2,
> > +             .subdevice = 0x0901,
> > +             .class = 0,
> > +             .class_mask = 0,
> > +             .driver_data = 0
> > +     },
> > +     {
> > +             .vendor = PCI_VENDOR_ID_ADAPTEC2,
> > +             .device = 0x028f,
> > +             .subvendor = PCI_VENDOR_ID_ADAPTEC2,
> > +             .subdevice = 0x0902,
> > +             .class = 0,
> > +             .class_mask = 0,
> > +             .driver_data = 0
> > +     },
> > +     {
> > +             .vendor = PCI_VENDOR_ID_ADAPTEC2,
> > +             .device = 0x028f,
> > +             .subvendor = PCI_VENDOR_ID_ADAPTEC2,
> > +             .subdevice = 0x0903,
> > +             .class = 0,
> > +             .class_mask = 0,
> > +             .driver_data = 0
> > +     },
> > +     {
> > +             .vendor = PCI_VENDOR_ID_ADAPTEC2,
> > +             .device = 0x028f,
> > +             .subvendor = PCI_VENDOR_ID_ADAPTEC2,
> > +             .subdevice = 0x0904,
> > +             .class = 0,
> > +             .class_mask = 0,
> > +             .driver_data = 0
> > +     },
> > +     {
> > +             .vendor = PCI_VENDOR_ID_ADAPTEC2,
> > +             .device = 0x028f,
> > +             .subvendor = PCI_VENDOR_ID_ADAPTEC2,
> > +             .subdevice = 0x0905,
> > +             .class = 0,
> > +             .class_mask = 0,
> > +             .driver_data = 0
> > +     },
> > +     {
> > +             .vendor = PCI_VENDOR_ID_ADAPTEC2,
> > +             .device = 0x028f,
> > +             .subvendor = PCI_VENDOR_ID_ADAPTEC2,
> > +             .subdevice = 0x0906,
> > +             .class = 0,
> > +             .class_mask = 0,
> > +             .driver_data = 0
> > +     },
> > +     {
> > +             .vendor = PCI_VENDOR_ID_ADAPTEC2,
> > +             .device = 0x028f,
> > +             .subvendor = PCI_VENDOR_ID_HP,
> > +             .subdevice = 0x1001,
> > +             .class = 0,
> > +             .class_mask = 0,
> > +             .driver_data = 0
> > +     },
> > +     {
> > +             .vendor = PCI_VENDOR_ID_ADAPTEC2,
> > +             .device = 0x028f,
> > +             .subvendor = PCI_VENDOR_ID_HP,
> > +             .subdevice = 0x1100,
> > +             .class = 0,
> > +             .class_mask = 0,
> > +             .driver_data = 0
> > +     },
> > +     {
> > +             .vendor = PCI_VENDOR_ID_ADAPTEC2,
> > +             .device = 0x028f,
> > +             .subvendor = PCI_VENDOR_ID_HP,
> > +             .subdevice = 0x1101,
> > +             .class = 0,
> > +             .class_mask = 0,
> > +             .driver_data = 0
> > +     },
> > +     {
> > +             .vendor = PCI_VENDOR_ID_ADAPTEC2,
> > +             .device = 0x028f,
> > +             .subvendor = PCI_VENDOR_ID_HP,
> > +             .subdevice = 0x1102,
> > +             .class = 0,
> > +             .class_mask = 0,
> > +             .driver_data = 0
> > +     },
> > +     {
> > +             .vendor = PCI_VENDOR_ID_ADAPTEC2,
> > +             .device = 0x028f,
> > +             .subvendor = PCI_VENDOR_ID_HP,
> > +             .subdevice = 0x1150,
> > +             .class = 0,
> > +             .class_mask = 0,
> > +             .driver_data = 0
> > +     },
> > +     {
> > +             .vendor = PCI_VENDOR_ID_ADAPTEC2,
> > +             .device = 0x028f,
> > +             .subvendor = PCI_ANY_ID,
> > +             .subdevice = PCI_ANY_ID,
> > +             .class = 0,
> > +             .class_mask = 0,
> > +             .driver_data = 0
> > +     },
> > +     { 0 }
> > +};
> 
> 
> The above table can be drastically reduced by using the PCI_DEVICE_SUB()
> macro:
> static const struct pci_device_id pqi_pci_id_table[] = {
>         { PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
>                          PCI_VENDOR_ID_ADAPTEC2, 0x0110) },
>         { PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
>                          PCI_VENDOR_ID_HP, 0x0600) },
>         ...
> 
> Also I think it makes more sense near the struct pci_driver further down, as
> that's the place it's actually used.

Both done.
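
For reference, the reduced table looks roughly like this (a sketch showing
only the first two and the terminating entries):

static const struct pci_device_id pqi_pci_id_table[] = {
	{ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
			 PCI_VENDOR_ID_ADAPTEC2, 0x0110) },
	{ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
			 PCI_VENDOR_ID_HP, 0x0600) },
	/* ... remaining subsystem IDs elided ... */
	{ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
			 PCI_ANY_ID, PCI_ANY_ID) },
	{ 0 }
};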

> 
> > +MODULE_DEVICE_TABLE(pci, pqi_pci_id_table);
> > +
> > +static struct {
> > +     atomic_t        num_requests[2];
> > +     atomic_t        num_raid_bypasses;
> > +     atomic_t        num_outstanding_requests[2];
> > +     atomic_t        num_passthru_ioctls;
> > +     atomic_t        num_heartbeat_events;
> > +     atomic_t        num_all_other_events;
> > +     atomic_t        num_scans;
> > +     atomic_t        num_interrupts;
> > +     atomic_t        num_spurious_interrupts;
> > +     atomic_t        num_alloc_io_request_collisions;
> > +     atomic_t        num_spans[2];
> > +     atomic_t        num_interrupts_wrong_cpu;
> > +} pqi_statistics = {
> > +     .num_requests[0] = ATOMIC_INIT(0),
> > +     .num_requests[1] = ATOMIC_INIT(0),
> > +     .num_raid_bypasses = ATOMIC_INIT(0),
> > +     .num_outstanding_requests[0] = ATOMIC_INIT(0),
> > +     .num_outstanding_requests[1] = ATOMIC_INIT(0),
> > +     .num_passthru_ioctls = ATOMIC_INIT(0),
> > +     .num_heartbeat_events = ATOMIC_INIT(0),
> > +     .num_all_other_events = ATOMIC_INIT(0),
> > +     .num_scans = ATOMIC_INIT(0),
> > +     .num_interrupts = ATOMIC_INIT(0),
> > +     .num_spurious_interrupts = ATOMIC_INIT(0),
> > +     .num_alloc_io_request_collisions = ATOMIC_INIT(0),
> > +     .num_spans[0] = ATOMIC_INIT(0),
> > +     .num_spans[1] = ATOMIC_INIT(0),
> > +     .num_interrupts_wrong_cpu = ATOMIC_INIT(0),
> > +};
> 
> Apart from Hannes' comments, I wonder if it really makes sense to have these
> statistics on a driver-global basis rather than per adapter?
> 

We removed the statistics.


> > +
> > +static void pqi_take_ctrl_offline(struct pqi_ctrl_info *ctrl_info);
> 
> [...]
> 
> > +static ssize_t pqi_statistics_show(struct device *dev,
> > +     struct device_attribute *attr, char *buffer)
> > +{
> > +     ssize_t count = 0;
> > +     unsigned int num_raid_requests;
> > +     unsigned int num_aio_requests;
> > +     unsigned int num_outstanding_raid_requests = 0;
> > +     unsigned int num_outstanding_aio_requests = 0;
> > +     struct pqi_ctrl_info *ctrl_info;
> > +     struct Scsi_Host *shost;
> > +
> > +     shost = class_to_shost(dev);
> > +     ctrl_info = shost_to_hba(shost);
> > +
> > +     num_raid_requests =
> > +             atomic_read(&pqi_statistics.num_requests[RAID_PATH]);
> > +     num_aio_requests =
> atomic_read(&pqi_statistics.num_requests[AIO_PATH]);
> > +
> > +     num_outstanding_raid_requests =
> > +     atomic_read(&pqi_statistics.num_outstanding_requests[RAID_PATH]);
> > +     num_outstanding_aio_requests =
> > +     atomic_read(&pqi_statistics.num_outstanding_requests[AIO_PATH]);
> > +
> > +     if (pqi_ctrl_offline(ctrl_info))
> > +             count += snprintf(buffer + count, PAGE_SIZE - count,
> > +                     "*** CONTROLLER OFFLINE ***\n");
> > +
> > +     count += snprintf(buffer + count, PAGE_SIZE - count,
> > +             "            total requests: %u\n",
> > +             num_raid_requests + num_aio_requests);
> > +
> > +     count += snprintf(buffer + count, PAGE_SIZE - count,
> > +             "       total RAID requests: %u\n",
> > +             num_raid_requests);
> > +
> > +     count += snprintf(buffer + count, PAGE_SIZE - count,
> > +             "        total AIO requests: %u\n",
> > +             num_aio_requests);
> > +
> > +     count += snprintf(buffer + count, PAGE_SIZE - count,
> > +             "             RAID bypasses: %u\n",
> > +             atomic_read(&pqi_statistics.num_raid_bypasses));
> > +
> > +     count += snprintf(buffer + count, PAGE_SIZE - count,
> > +             "total outstanding requests: %u\n",
> > +             num_outstanding_raid_requests +
> num_outstanding_aio_requests);
> > +
> > +     count += snprintf(buffer + count, PAGE_SIZE - count,
> > +             " outstanding RAID requests: %u / %u\n",
> > +             num_outstanding_raid_requests,
> > +             ctrl_info->num_queue_groups *
> > +             (ctrl_info->num_elements_per_iq - 1));
> > +
> > +     count += snprintf(buffer + count, PAGE_SIZE - count,
> > +             "  outstanding AIO requests: %u / %u\n",
> > +             num_outstanding_aio_requests,
> > +             ctrl_info->num_queue_groups *
> > +             (ctrl_info->num_elements_per_iq - 1));
> > +
> > +     count += snprintf(buffer + count, PAGE_SIZE - count,
> > +             "        passthrough IOCTLs: %u\n",
> > +             atomic_read(&pqi_statistics.num_passthru_ioctls));
> > +
> > +     count += snprintf(buffer + count, PAGE_SIZE - count,
> > +             "          heartbeat events: %u\n",
> > +             atomic_read(&pqi_statistics.num_heartbeat_events));
> > +
> > +     count += snprintf(buffer + count, PAGE_SIZE - count,
> > +             "          all other events: %u\n",
> > +             atomic_read(&pqi_statistics.num_all_other_events));
> > +
> > +     count += snprintf(buffer + count, PAGE_SIZE - count,
> > +             "                scan count: %u\n",
> > +             atomic_read(&pqi_statistics.num_scans));
> > +
> > +     count += snprintf(buffer + count, PAGE_SIZE - count,
> > +             "                interrupts: %u\n",
> > +             atomic_read(&pqi_statistics.num_interrupts));
> > +
> > +     count += snprintf(buffer + count, PAGE_SIZE - count,
> > +             "       spurious interrupts: %u\n",
> > +             atomic_read(&pqi_statistics.num_spurious_interrupts));
> > +
> > +     count += snprintf(buffer + count, PAGE_SIZE - count,
> > +             "  alloc request collisions: %u\n",
> > +             atomic_read(&pqi_statistics.num_alloc_io_request_collisions));
> > +
> > +     count += snprintf(buffer + count, PAGE_SIZE - count,
> > +             "                RAID spans: %u\n",
> > +             atomic_read(&pqi_statistics.num_spans[RAID_PATH]));
> > +
> > +     count += snprintf(buffer + count, PAGE_SIZE - count,
> > +             "                 AIO spans: %u\n",
> > +             atomic_read(&pqi_statistics.num_spans[AIO_PATH]));
> > +
> > +     count += snprintf(buffer + count, PAGE_SIZE - count,
> > +             "   interrupts on wrong CPU: %u\n",
> > +             atomic_read(&pqi_statistics.num_interrupts_wrong_cpu));
> > +
> > +     return count;
> > +}
> 
> Wasn't the sysfs rule one entry per file?

We removed this code.
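
For illustration, a one-value-per-file attribute would have looked something
like this minimal sketch (the ctrl_info field is hypothetical, since the
statistics were removed):

static ssize_t pqi_num_scans_show(struct device *dev,
	struct device_attribute *attr, char *buffer)
{
	struct Scsi_Host *shost = class_to_shost(dev);
	struct pqi_ctrl_info *ctrl_info = shost_to_hba(shost);

	/* one value, one file, per the sysfs convention */
	return snprintf(buffer, PAGE_SIZE, "%u\n",
			atomic_read(&ctrl_info->num_scans));
}
static DEVICE_ATTR(num_scans, S_IRUGO, pqi_num_scans_show, NULL);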

> 
> > +
> > +static ssize_t pqi_devices_show(struct device *dev,
> > +     struct device_attribute *attr, char *buffer)
> > +{
> 
> [...]
> 
> > +     rc = pqi_build_raid_path_request(ctrl_info, &request,
> > +             SA_CACHE_FLUSH, RAID_CTLR_LUNID, buffer,
> > +             SA_CACHE_FLUSH_BUFFER_LENGTH, 0, &pci_direction);
> > +     if (rc)
> > +             goto out;
> > +
> > +     rc = pqi_submit_raid_request_synchronous(ctrl_info, &request.header,
> > +             0, NULL, NO_TIMEOUT);
> 
> So if this is called via the shutdown() -> pqi_shutdown() path it /may/ block
> shutdown indefinitely? I'm not sure this is what people want.

A timeout has been added.
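
For illustration, the fix amounts to replacing NO_TIMEOUT with a bounded
wait, along these lines (the macro name and value are hypothetical):

#define PQI_FLUSH_CACHE_TIMEOUT	(30 * HZ)	/* hypothetical bound */

	rc = pqi_submit_raid_request_synchronous(ctrl_info, &request.header,
		0, NULL, PQI_FLUSH_CACHE_TIMEOUT);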

> 
> > +
> > +     pqi_pci_unmap(ctrl_info->pci_dev, request.sg_descriptors, 1,
> > +             pci_direction);
> > +
> > +out:
> > +     kfree(buffer);
> > +
> 
> [...]
> 
> > +static void pqi_complete_queued_requests(struct pqi_ctrl_info *ctrl_info,
> > +     struct pqi_scsi_dev_t *device_in_reset)
> > +{
> > +     unsigned int i;
> > +     unsigned int path;
> > +     struct pqi_queue_group *queue_group;
> > +     unsigned long flags;
> > +     struct pqi_io_request *io_request;
> > +     struct pqi_io_request *next;
> > +     struct scsi_cmnd *scmd;
> > +     struct pqi_scsi_dev_t *device;
> > +
> > +     for (i = 0; i < ctrl_info->num_queue_groups; i++) {
> > +             queue_group = &ctrl_info->queue_groups[i];
> > +
> > +             for (path = 0; path < 2; path++) {
> > +                     spin_lock_irqsave(&queue_group->submit_lock[path],
> > +                             flags);
> > +
> > +                     list_for_each_entry_safe(io_request, next,
> > +                             &queue_group->request_list[path],
> > +                             request_list_entry) {
> > +                             scmd = io_request->scmd;
> > +                             if (scmd) {
> > +                                     device = scmd->device->hostdata;
> > +                                     if (device == device_in_reset) {
> > +                                             set_host_byte(scmd, DID_RESET);
> > +                                             pqi_scsi_done(scmd);
> > +                                             list_del(&io_request->
> > +                                                     request_list_entry);
> > +                                     }
> > +                             }
> > +                     }
> > +
> > +                     spin_unlock_irqrestore(&queue_group->submit_lock[path],
> > +                             flags);
> > +             }
> 
> If you'd factor the inner for loop into its own small (inline) function, it
> could drastically improve readability here.
> 
> something like
> 
> static void pqi_complete_queue_group(struct pqi_queue_group *qg,
>                                      struct pqi_scsi_dev_t *device_in_reset)
> {
>         unsigned int path;
>         unsigned long flags;
>         struct scsi_cmnd *scmd;
>         struct pqi_scsi_dev_t *device;
>         struct pqi_io_request *ior, *next;
> 
>         for (path = 0; path < 2; path++) {
>                 spin_lock_irqsave(&qg->submit_lock[path], flags);
> 
>                 list_for_each_entry_safe(ior, next, &qg->request_list[path],
>                                          request_list_entry) {
>                         scmd = ior->scmd;
>                         if (!scmd)
>                                 continue;
> 
>                         device = scmd->device->hostdata;
>                         if (device == device_in_reset) {
>                                 set_host_byte(scmd, DID_RESET);
>                                 pqi_scsi_done(scmd);
>                                 list_del(&ior->request_list_entry);
>                         }
>                 }
>                 spin_unlock_irqrestore(&qg->submit_lock[path], flags);
>         }
> }
> 
> > +     }
> > +}
> > +
> 
> [..]
> 

Done
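
With that helper factored out, the outer loop reduces to something like:

static void pqi_complete_queued_requests(struct pqi_ctrl_info *ctrl_info,
	struct pqi_scsi_dev_t *device_in_reset)
{
	unsigned int i;

	for (i = 0; i < ctrl_info->num_queue_groups; i++)
		pqi_complete_queue_group(&ctrl_info->queue_groups[i],
					 device_in_reset);
}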

> > +
> > +static DEVICE_ATTR(sas_address, S_IRUGO, pqi_sas_address_show, NULL);
> > +static DEVICE_ATTR(ssd_smart_path_enabled, S_IRUGO,
> > +     pqi_ssd_smart_path_enabled_show, NULL);
> > +
> > +static struct device_attribute *pqi_sdev_attrs[] = {
> > +     &dev_attr_sas_address,
> > +     &dev_attr_ssd_smart_path_enabled,
> > +     NULL
> > +};
> > +
> > +static DEVICE_ATTR(rescan, S_IWUSR, NULL, pqi_host_store_rescan);
> > +static DEVICE_ATTR(stats, S_IRUGO, pqi_statistics_show, NULL);
> > +static DEVICE_ATTR(devices, S_IRUGO, pqi_devices_show, NULL);
> > +static DEVICE_ATTR(version, S_IRUGO, pqi_version_show, NULL);
> > +static DEVICE_ATTR(pqi, S_IRUGO, pqi_info_show, NULL);
> > +
> > +static struct device_attribute *pqi_shost_attrs[] = {
> > +     &dev_attr_rescan,
> > +     &dev_attr_stats,
> > +     &dev_attr_devices,
> > +     &dev_attr_version,
> > +     &dev_attr_pqi,
> > +     NULL
> > +};
> 
> This should be somewhere near the functions implementing the sysfs show
> callbacks.
> 
> > +
> > +static int pqi_change_queue_depth(struct scsi_device *sdev, int qdepth)
> > +{
> > +     struct pqi_scsi_dev_t *device = sdev->hostdata;
> 
> [...]
> 

Done


> --
> Johannes Thumshirn                                          Storage
> jthumshirn@xxxxxxx                                +49 911 74053 689
> SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
> GF: Felix Imendörffer, Jane Smithard, Graham Norton
> HRB 21284 (AG Nürnberg)
> Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850


