Re: [PATCH v3 11/20] sg: replace rq array with lists

On 2019-08-12 10:35 a.m., Christoph Hellwig wrote:
> On Wed, Aug 07, 2019 at 01:42:43PM +0200, Douglas Gilbert wrote:
>> Remove the fixed size array of 16 request elements per file
>> descriptor and replace with two linked lists per file descriptor.
>> One list is for active commands, the other list is a free list.
>> sg_request objects are now kept, available for re-use, until
>> their owning file descriptor is closed. The sg_request deletions
>> are now in sg_remove_sfp_usercontext(). Each active sg_request
>> has associated block request and scsi_request objects which are
>> released much earlier; their lifetime is the same as it was in
>> the v3 sg driver. The lifetime of the bio is also the same (but
>> is stretched in a later patch).
>
> Willy would probably want you to use an xarray.

The new code spends little time scanning the active request
list, so I see very little benefit from the O(1) random access
capability that I assume an xarray would offer. It might be useful
if my code could access the active request list/array by pack_id.
That would require sparse array support, but even that is not
sufficient because the pack_id is user supplied and there has never
been a requirement in the sg driver that the pack_id be unique.
If the user searches for a response by pack_id and it is not
unique, then the first one found on the list is given back. That
will typically be the oldest.
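
For reference, a sketch of the kind of linear scan I mean (locking
and request state checks omitted; the list and field names here are
illustrative rather than the exact ones in the patch):

static struct sg_request *
sg_find_srp_by_id(struct sg_fd *sfp, int pack_id)
{
        struct sg_request *srp;

        /* walk the per-fd active list in submission order */
        list_for_each_entry(srp, &sfp->rq_list, entry) {
                /* -1 acts as a wildcard matching any request */
                if (pack_id == -1 || srp->pack_id == pack_id)
                        return srp;     /* first (oldest) match wins */
        }
        return NULL;
}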


BTW I have added a per file descriptor soft limit on the sum of
data-in and data-out buffer lengths in use (sg_fd::sum_fd_dlens).
That is meant to stop a misbehaving user space app (e.g. one calling
ioctl(SG_IOSUBMIT) in an infinite loop) from eating up all the
kernel's memory.
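
The check itself is trivial; something along these lines (a sketch
only: the threshold constant and error code are illustrative, not
necessarily what the patch uses):

static int
sg_check_fd_dlen(struct sg_fd *sfp, int new_dlen)
{
        /* reject the submit if this fd already holds too much dout+din
         * buffer space; SG_TOT_FD_THRESHOLD is an illustrative limit */
        if (sfp->sum_fd_dlens + new_dlen > SG_TOT_FD_THRESHOLD)
                return -E2BIG;
        sfp->sum_fd_dlens += new_dlen;
        return 0;
}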

>> Add an enum for request state (sg_rq_state) and collect
>> various flags into bit maps: one for requests (SG_FRQ_*) and
>> the other for file descriptors (SG_FFD_*). They join a
>> per sg_device bit map (SG_FDEV_*) added in an earlier
>> patch.
>>
>> Prior to a new sg_request object being (re-)built, information
>> that will be placed in it uses a new struct sg_comm_wr_t
>> object.
>>
>> Since the above changes touch almost every function and low
>> level structures, this patch is big. With so many changes, the
>> diff utility that generates the patch sometimes loses track.
>
> There seem to be a lot of rather unrelated changes here.  Also
> please don't use __put_user/__get_user for that same reason that
> drivers should not use __copy_to_user/__copy_from_user.


The __put_user() calls are all in one place ***:

static int
sg_ctl_scsi_id(struct scsi_device *sdev, struct sg_fd *sfp, void __user *p)
{
        struct sg_scsi_id __user *sg_idp = p;
        struct scsi_lun lun8b;

        SG_LOG(3, sfp, "%s:    SG_GET_SCSI_ID\n", __func__);
        if (unlikely(!access_ok(p, sizeof(struct sg_scsi_id))))
                return -EFAULT;

        if (unlikely(SG_IS_DETACHING(sfp->parentdp)))
                return -ENODEV;
        __put_user((int)sdev->host->host_no,
                   &sg_idp->host_no);
        __put_user((int)sdev->channel, &sg_idp->channel);
        __put_user((int)sdev->id, &sg_idp->scsi_id);
        __put_user((int)sdev->lun, &sg_idp->lun);
        __put_user((int)sdev->type, &sg_idp->scsi_type);
        __put_user((short)sdev->host->cmd_per_lun,
                   &sg_idp->h_cmd_per_lun);
        __put_user((short)sdev->queue_depth,
                   &sg_idp->d_queue_depth);
        int_to_scsilun(sdev->lun, &lun8b);
        __copy_to_user(sg_idp->scsi_lun, lun8b.scsi_lun, 8);
        return 0;
}

So it's an optimization, with a (pretty small) risk that the user
space process disappears after the access_ok() and before the
final __copy_to_user() completes.

Obviously a better way is to build the output in a kernel-based
object and copy_to_user() the lot in one shot.
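
Roughly like this (a sketch, not the patch's code; it assumes
struct sg_scsi_id in this patch set has the scsi_lun[8] member that
the function above fills):

static int
sg_ctl_scsi_id(struct scsi_device *sdev, struct sg_fd *sfp, void __user *p)
{
        struct sg_scsi_id ssi;          /* kernel-side staging object */
        struct scsi_lun lun8b;

        SG_LOG(3, sfp, "%s:    SG_GET_SCSI_ID\n", __func__);
        if (unlikely(SG_IS_DETACHING(sfp->parentdp)))
                return -ENODEV;
        memset(&ssi, 0, sizeof(ssi));
        ssi.host_no = (int)sdev->host->host_no;
        ssi.channel = (int)sdev->channel;
        ssi.scsi_id = (int)sdev->id;
        ssi.lun = (int)sdev->lun;
        ssi.scsi_type = (int)sdev->type;
        ssi.h_cmd_per_lun = (short)sdev->host->cmd_per_lun;
        ssi.d_queue_depth = (short)sdev->queue_depth;
        int_to_scsilun(sdev->lun, &lun8b);
        memcpy(ssi.scsi_lun, lun8b.scsi_lun, sizeof(lun8b.scsi_lun));
        /* one checked copy replaces access_ok() plus the unchecked
         * __put_user()/__copy_to_user() calls */
        if (copy_to_user(p, &ssi, sizeof(ssi)))
                return -EFAULT;
        return 0;
}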

Doug Gilbert


*** which you would know if you had been applying the patches to the
    in-kernel driver as you went. Reviewing a big patch set (in this
    case: a driver of about 3000 lines, changing half of them and
    ending up with 4500 lines) without seeing the effect the patches
    have on the existing code (by applying them) is 'impressive'. It
    is like doing analysis on a chess match when you can't see the
    board but instead have a long list of moves from a starting
    position that you only have a fuzzy recollection of.



