Re: [PATCH 5/8] sg: add free list, rework locking

On 10/19/18 2:24 AM, Douglas Gilbert wrote:

> +	if (sfp->tot_fd_thresh)
> +		sfp->sum_fd_dlens += align_sz;
>
What lock protects sfp->sum_fd_dlens?
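If the intent is for that update to be covered by the fd's
rq_list_lock, it may be worth making it explicit.  Just a sketch using
the names from your patch (ignore it if the caller already holds the
lock):

	if (sfp->tot_fd_thresh) {
		unsigned long iflags;

		/* take the per-fd lock around the shared counter */
		write_lock_irqsave(&sfp->rq_list_lock, iflags);
		sfp->sum_fd_dlens += align_sz;
		write_unlock_irqrestore(&sfp->rq_list_lock, iflags);
	}

Otherwise two threads sharing the fd can race on the read-modify-write
and the accounting will drift.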


>  /*
> @@ -2216,38 +2401,85 @@ sg_add_request(struct sg_fd *sfp)
>   * data length exceeds rem_sgat_thresh then the data (or sgat) is
>   * cleared and the request is appended to the tail of the free list.
>   */
> -static int
> +static void
>  sg_remove_request(struct sg_fd *sfp, struct sg_request *srp)
>  {
> +	bool reserve;
>  	unsigned long iflags;
> -	int res = 0;
> +	const char *cp = "head";
> +	char b[64];
>  
> -	if (!sfp || !srp || list_empty(&sfp->rq_list))
> -		return res;
> +	if (WARN_ON(!sfp || !srp))
> +		return;
>  	write_lock_irqsave(&sfp->rq_list_lock, iflags);
> -	if (!list_empty(&srp->entry)) {
> -		list_del(&srp->entry);
> -		srp->parentfp = NULL;
> -		res = 1;
> -	}
> +	spin_lock(&srp->rq_entry_lck);
> +	/*
> +	 * N.B. sg_request object not de-allocated (freed). The contents of
> +	 * rq_list and rq_free_list lists are de-allocated (freed) when the
> +	 * owning file descriptor is closed. The free list acts as a LIFO.
> +	 * This can improve the chance of a cache hit when request is re-used.
> +	 */
> +	reserve = (sfp->reserve_srp == srp);
> +	if (reserve || srp->data.dlen <= sfp->rem_sgat_thresh) {
> +		list_del(&srp->rq_entry);
> +		if (srp->data.dlen > 0)
> +			list_add(&srp->free_entry, &sfp->rq_free_list);
> +		else {
> +			list_add_tail(&srp->free_entry, &sfp->rq_free_list);
> +			cp = "tail";
> +		}
> +		snprintf(b, sizeof(b), "%ssrp=0x%p move to fl %s",
> +			 (reserve ? "reserve " : ""), srp, cp);
> +	} else {
> +		srp->rq_state = SG_RQ_BUSY;
> +		list_del(&srp->rq_entry);
> +		spin_unlock(&srp->rq_entry_lck);
> +		write_unlock_irqrestore(&sfp->rq_list_lock, iflags);
> +		if (sfp->sum_fd_dlens) {
> +			u32 uv = srp->data.dlen;
> +
> +			if (uv <= sfp->sum_fd_dlens)
> +				sfp->sum_fd_dlens -= uv;
> +			else {
> +				SG_LOG(2, sfp->parentdp,
> +				       "%s: logic error this dlen > %s\n",
> +				       __func__, "sum_fd_dlens");
> +				sfp->sum_fd_dlens = 0;
> +			}
> +		}
> +		sg_remove_sgat(srp);
> +		/* don't kfree(srp), move clear request to tail of fl */
> +		write_lock_irqsave(&sfp->rq_list_lock, iflags);
> +		spin_lock(&srp->rq_entry_lck);
> +		list_add_tail(&srp->free_entry, &sfp->rq_free_list);
> +		snprintf(b, sizeof(b), "clear sgat srp=0x%p move to fl tail",
> +			 srp);
> +	}
>
>
Here again, there is no lock protecting sfp->sum_fd_dlens while it is
being updated.
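
If taking rq_list_lock just for the accounting is unattractive,
another option would be to make sum_fd_dlens an atomic_t so neither
path needs a lock.  Roughly (a sketch only; the declaration, the
increment quoted earlier and any readers would have to change as
well):

	/* hypothetically, in struct sg_fd:  atomic_t sum_fd_dlens; */

	/* subtract and clamp at zero, best effort like the existing check */
	if (atomic_sub_return((int)srp->data.dlen, &sfp->sum_fd_dlens) < 0)
		atomic_set(&sfp->sum_fd_dlens, 0);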

Incidentally, I have been using my own home-grown target-mode SCSI
system for the past 16 years, but now I am starting to look into
switching to SCST.  I was just reading about their "SGV cache":

http://scst.sourceforge.net/scst_pg.html

It looks like it serves a similar purpose to what you are trying to
accomplish by recycling the indirect I/O buffers between different
requests.  Perhaps you can borrow some inspiration from them (or even
some code).
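
For what it's worth, the lookup side of that sort of recycling usually
boils down to something like the sketch below.  This is purely
illustrative, reusing the names from your patch plus a hypothetical
sg_fl_get() helper, and ignoring locking and the reserve request:

	static struct sg_request *
	sg_fl_get(struct sg_fd *sfp, int dlen)
	{
		struct sg_request *srp;

		/*
		 * Caller holds rq_list_lock.  The free list is kept LIFO so
		 * the most recently used sgat (likely still cache-warm) is
		 * seen first.
		 */
		list_for_each_entry(srp, &sfp->rq_free_list, free_entry) {
			if (srp->data.dlen >= dlen) {
				list_del(&srp->free_entry);
				return srp;	/* reuse its sgat list */
			}
		}
		return NULL;	/* nothing big enough; allocate a new one */
	}

IIRC the SGV cache layers size-binned pools and memory-pressure
reclaim on top of that basic idea, which might map nicely onto your
tot_fd_thresh accounting.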

Tony Battersby
Cybernetics



