RE: [PATCH v3 04/24] mpi3mr: add support of queue command processing

> > +/**
> > + * mpi3mr_scmd_from_host_tag - Get SCSI command from host tag
> > + * @mrioc: Adapter instance reference
> > + * @host_tag: Host tag
> > + * @qidx: Operational queue index
> > + *
> > + * Identify the block tag from the host tag and queue index and
> > + * retrieve associated scsi command using scsi_host_find_tag().
> > + *
> > + * Return: SCSI command reference or NULL.
> > + */
> > +static struct scsi_cmnd *mpi3mr_scmd_from_host_tag(
> > +	struct mpi3mr_ioc *mrioc, u16 host_tag, u16 qidx) {
> > +	struct scsi_cmnd *scmd = NULL;
> > +	struct scmd_priv *priv = NULL;
> > +	u32 unique_tag = host_tag - 1;
> > +
> > +	if (WARN_ON(host_tag > mrioc->max_host_ios))
> > +		goto out;
> > +
> > +	unique_tag |= (qidx << BLK_MQ_UNIQUE_TAG_BITS);
> > +
> > +	scmd = scsi_host_find_tag(mrioc->shost, unique_tag);
> > +	if (scmd) {
> > +		priv = scsi_cmd_priv(scmd);
> > +		if (!priv->in_lld_scope)
> > +			scmd = NULL;
> > +	}
>
> That, I guess, is wrong.
>
> And 'real' unique tag (ie encoding the hwq num and the tag) only makes
> sense if you have _separate_ tag pools per queue.
> As your HBA supports only a single tag space _per HBA_ that would mean
> that you would have to _split_ that pool between hardware queues.

Hannes -

In the current series we do have separate tag pools per queue. There are
two aspects of this driver/hardware that do not match NVMe-style native
multiqueue support (see the sketch below for the tag encoding we rely on):

1. Memory usage is too high, which is why we added segmented queue
support. We detect queue-full per operational queue; that is an unlikely
event, but we want to keep the check for now and will revisit it once the
hardware product goes through aggressive testing.

2. The can_queue value the hardware exposes is not a per-operational-queue
limit, but the hardware also does not break when more than can_queue
commands are outstanding. The actual per-queue and controller-wide
outstanding limits on this hardware are not well defined, only that they
are much higher than can_queue; we plan to handle that area later.
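
For reference, the host_tag/qidx encoding in mpi3mr_scmd_from_host_tag()
above follows blk-mq's own unique-tag convention. A minimal sketch of the
compose/decode round trip (the two helpers are illustrative, not driver
code; only BLK_MQ_UNIQUE_TAG_BITS/_MASK and the blk_mq_unique_tag_to_*()
accessors come from <linux/blk-mq.h>):

#include <linux/blk-mq.h>

/* Compose: upper bits carry the hw queue index, lower bits the tag. */
static u32 make_unique_tag(u16 qidx, u16 blk_tag)
{
	return ((u32)qidx << BLK_MQ_UNIQUE_TAG_BITS) |
	       (blk_tag & BLK_MQ_UNIQUE_TAG_MASK);
}

/* Decode: scsi_host_find_tag() performs the same split internally. */
static void decode_unique_tag(u32 unique_tag, u16 *qidx, u16 *blk_tag)
{
	*qidx = blk_mq_unique_tag_to_hwq(unique_tag);
	*blk_tag = blk_mq_unique_tag_to_tag(unique_tag);
}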

> Which I don't think you do, as this would lead to a very imbalanced tag
> usage and ultimately a tag starvation on large systems.
> Hence each per-HWQ bitmap will cover the _full_ tag space, and the only
> way to make that work is to use shared hosttags.

In my initial study I noticed that shared host tags give similar
performance, so we plan to move to shared host tags in the future. With
that change we strictly follow can_queue-level throttling in the SML,
regardless of how much the hardware can actually support.
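
If/when we make that switch, the driver-side change should be small. A
minimal sketch, assuming the standard host_tagset flag in struct
Scsi_Host; the probe helper name and the num_op_reply_q field are
placeholders, not the actual driver code:

static int mpi3mr_shared_tags_sketch(struct mpi3mr_ioc *mrioc)
{
	struct Scsi_Host *shost = mrioc->shost;

	/*
	 * One tag space shared across all hardware queues: blk-mq
	 * then caps HBA-wide outstanding commands at can_queue, so
	 * the SML enforces the throttling described above.
	 */
	shost->host_tagset = 1;
	shost->can_queue = mrioc->max_host_ios;
	shost->nr_hw_queues = mrioc->num_op_reply_q;

	return scsi_add_host(shost, &mrioc->pdev->dev);
}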

Kashyap

>
> Cheers,
>
> Hannes
> --
> Dr. Hannes Reinecke		        Kernel Storage Architect
> hare@xxxxxxx			               +49 911 74053 688
> SUSE Software Solutions Germany GmbH, 90409 Nürnberg
> GF: F. Imendörffer, HRB 36809 (AG Nürnberg)
