Re: [PATCH 2/2] scsi: core: avoid to pre-allocate big chunk for sg list

On Tue, Apr 23, 2019 at 06:32:40PM +0800, Ming Lei wrote:
> big, the whole pre-allocation for the sg list can consume huge memory.
> For example, for lpfc, nr_hw_queues can be 70 and each queue's depth
> can be 3781, so the pre-allocation for the data sg list can be
> 70*3781*2k = 517MB for a single HBA.

We should probably limit the number of queues to something actually
useful, independent of your patch.
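
Something like this is the kind of cap I have in mind (purely
illustrative, not lpfc-specific, and the exact limit is of course
debatable; shost here is the Scsi_Host being set up):

	/* don't advertise more hw queues than there are CPUs to drive them */
	shost->nr_hw_queues = min_t(unsigned int, shost->nr_hw_queues,
				    num_online_cpus());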

> +static bool scsi_use_inline_sg(struct scsi_cmnd *cmd)
> +{
> +	struct scatterlist *sg = (void *)cmd + sizeof(struct scsi_cmnd) +
> +		cmd->device->host->hostt->cmd_size;
> +
> +	return cmd->sdb.table.sgl == sg;
> +}

It might make more sense to have a helper to calculate the inline
sg address and use that for the comparison in scsi_mq_free_sgtables
and any other place that wants the address.
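
Something like this (helper name made up, untested), so that the
"do we use the inline sg" check and every user of the inline address
share a single definition:

	static struct scatterlist *scsi_inline_sg(struct scsi_cmnd *cmd)
	{
		return (void *)cmd + sizeof(struct scsi_cmnd) +
			cmd->device->host->hostt->cmd_size;
	}

	static bool scsi_use_inline_sg(struct scsi_cmnd *cmd)
	{
		return cmd->sdb.table.sgl == scsi_inline_sg(cmd);
	}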

> +	if (cmd->sdb.table.nents && !scsi_use_inline_sg(cmd))
> +		sg_free_table_chained(&cmd->sdb.table, false);

This removes the last use of the first_chunk parameter to
sg_free_table_chained; please remove the parameter in an additional
patch.
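
With that the call site above would shrink to something like this
(assuming the prototype simply loses the bool in that follow-up):

	if (cmd->sdb.table.nents && !scsi_use_inline_sg(cmd))
		sg_free_table_chained(&cmd->sdb.table);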

> +	if (nr_segs <= SCSI_INLINE_SG_CNT)
> +		sdb->table.nents = sdb->table.orig_nents =
> +			SCSI_INLINE_SG_CNT;

Don't we need a sg_init_table here?
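
Otherwise the inline sgl could still carry stale chain/end markers
from a previous command.  Roughly (untested, and assuming
sdb->table.sgl already points at the inline array at this point):

	if (nr_segs <= SCSI_INLINE_SG_CNT) {
		sdb->table.nents = sdb->table.orig_nents =
			SCSI_INLINE_SG_CNT;
		sg_init_table(sdb->table.sgl, SCSI_INLINE_SG_CNT);
	}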

> +	else if (unlikely(sg_alloc_table_chained(&sdb->table, nr_segs,
> +					NULL)))
>  		return BLK_STS_RESOURCE;

We should probably also be able to drop the last parameter to
sg_alloc_table_chained now.
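
That would leave us with simpler prototypes along the lines of
(sketch only, the actual cleanup belongs in a separate patch):

	int sg_alloc_table_chained(struct sg_table *table, int nents);
	void sg_free_table_chained(struct sg_table *table);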


