RE: [PATCH v2 11/15] megaraid_sas: Set device queue_depth same as HBA can_queue value in scsi-mq mode

> -----Original Message-----
> From: Shivasharan Srikanteshwara
> [mailto:shivasharan.srikanteshwara@xxxxxxxxxxxx]
> Sent: Monday, July 24, 2017 4:59 PM
> To: 'Christoph Hellwig'
> Cc: 'linux-scsi@xxxxxxxxxxxxxxx'; 'martin.petersen@xxxxxxxxxx';
> 'thenzl@xxxxxxxxxx'; 'jejb@xxxxxxxxxxxxxxxxxx'; Sumit Saxena;
> 'hare@xxxxxxxx'; Kashyap Desai
> Subject: RE: [PATCH v2 11/15] megaraid_sas: Set device queue_depth same as
> HBA can_queue value in scsi-mq mode
>
> > -----Original Message-----
> > From: Christoph Hellwig [mailto:hch@xxxxxx]
> > Sent: Thursday, July 20, 2017 1:18 PM
> > To: Shivasharan Srikanteshwara
> > Cc: Christoph Hellwig; linux-scsi@xxxxxxxxxxxxxxx;
> > martin.petersen@xxxxxxxxxx; thenzl@xxxxxxxxxx;
> > jejb@xxxxxxxxxxxxxxxxxx; Sumit Saxena; hare@xxxxxxxx; Kashyap Desai
> > Subject: Re: [PATCH v2 11/15] megaraid_sas: Set device queue_depth
> > same as HBA can_queue value in scsi-mq mode
> >
> > I still don't understand why you don't want to do the same for the
> > non-mq path.
>
> Hi Christoph,
>
> Sorry for delay in response.
>
> MQ case -
> If any block layer requeue happens, we see a performance drop, so we
> avoid requeues by increasing the Device QD to the HBA QD. The performance
> drop caused by block layer requeues is larger for HDDs (sequential IO
> gets converted into random IO).
>
> Non-MQ case -
> If we increase the Device QD to the HBA QD in the non-mq case, we see a
> performance drop for certain profiles.
> For example, for a SATA SSD the previous driver set Device QD=32 in
> non-mq mode. In that case, if there are more outstanding IOs per device
> (more than 32), the block layer attempts soft merges and the end user
> eventually sees higher performance because of those merges. The same is
> not true in the MQ case, where the IO scheduler adds overhead whenever
> there is any throttling or staging due to the device QD.
>
> Below is an example with a single SATA SSD, Sequential Read, BS=4K,
> IO depth = 256:
>
> MQ enabled, Device QD = 32 achieves 137K IOPS
> MQ enabled, Device QD = 916 (HBA QD) achieves 145K IOPS
>
> MQ disabled, Device QD = 32 achieves 237K IOPS
> MQ disabled, Device QD = 916 (HBA QD) achieves 145K IOPS
>
> Ideally we would keep the same QD settings in non-MQ mode as well, but we
> are avoiding that for now since end users may see a regression, as
> explained above.
>
> Thanks,
> Shivasharan

Hi Christoph,
Can you please let us know your thoughts on this?
We understand that the settings should ideally be uniform across the non-mq
and mq cases.
But based on the test results above, in the non-mq case we see a performance
difference for certain IO profiles compared to earlier releases after
increasing the queue depth. That is not the case when mq is enabled.
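
For reference, the split we are describing amounts to roughly the following
in the driver's ->slave_configure() hook. This is only an illustrative
sketch, not the exact patch; MEGASAS_LEGACY_SDEV_QD is a made-up name for
the pre-existing default per-device depth, and scsi-mq detection is assumed
to go through shost_use_blk_mq():

static int megasas_slave_configure(struct scsi_device *sdev)
{
	struct Scsi_Host *host = sdev->host;

	if (shost_use_blk_mq(host)) {
		/* scsi-mq: raise the per-device depth to the HBA limit so
		 * the block layer does not requeue/throttle per device */
		scsi_change_queue_depth(sdev, host->can_queue);
	} else {
		/* non-mq: keep the lower per-device depth so the block
		 * layer can keep soft-merging queued IO */
		scsi_change_queue_depth(sdev, MEGASAS_LEGACY_SDEV_QD);
	}

	return 0;
}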

Based on these results, we would like to keep this patch as-is for this
phase.
We will run further tests and update for the next phase.

Thanks,
Shivasharan


