Re: [PATCH v2] scsi_debug: implement IMMED bit

On Fri, Feb 09, 2018 at 09:36:39PM -0500, Douglas Gilbert wrote:
> The Start Stop Unit (SSU) command takes on the order of a second to
> complete on some SAS SSDs and longer on hard disks. Synchronize Cache (SC)
> can also take some time. Both commands have an IMMED bit in their cdbs for
> those apps that don't want to wait. This patch introduces a long delay for
> those commands when the IMMED bit is clear.
> Since SC is a media access command, when the fake_rw option is active its
> cdb processing is skipped and it returns immediately. The SSU command
> is not altered by the setting of the fake_rw option. These actions are
> not changed by this patch.
> 
> Changes since v1:
>   - clear the cdb mask of SYNCHRONIZE CACHE(16) cdb in byte 1, bit 0
> 
> Changes:
>   - add the SYNCHRONIZE CACHE(16) command
>   - together with the existing START STOP UNIT and SYNCHRONIZE CACHE(10)
>     commands process the IMMED bit in their cdbs
>   - if the IMMED bit is set, return immediately
>   - if the IMMED bit is clear, treat the delay parameter as having
>     a unit of one second
>   - in the SYNCHRONIZE CACHE processing do a bounds check

Hello Douglas,

I found that this patch makes my tests on scsi_debug much, much slower, and
basically makes scsi_debug unusable in sync-IO-related tests such as
creating partitions (parted) or 'dbench -s'.
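
If I understand the patch correctly, everything hinges on the IMMED bit in
the cdbs; roughly speaking (a hand-written sketch of where that bit lives,
not the actual patch code):

	#include <stdbool.h>

	/*
	 * Sketch only: IMMED bit positions for the commands the patch
	 * touches.  IMMED set -> the command completes immediately;
	 * IMMED clear -> the delay parameter is treated as whole seconds.
	 */
	static bool immed_bit_set(const unsigned char *cdb)
	{
		switch (cdb[0]) {
		case 0x1b:	/* START STOP UNIT: byte 1, bit 0 */
			return cdb[1] & 0x01;
		case 0x35:	/* SYNCHRONIZE CACHE(10): byte 1, bit 1 */
		case 0x91:	/* SYNCHRONIZE CACHE(16): byte 1, bit 1 */
			return cdb[1] & 0x02;
		default:
			return false;
		}
	}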

For example:

1) scsi_debug:
  modprobe scsi_debug dev_size_mb=1024 max_queue=1

2) parted
- time taken by the following commands increased from 1.3 sec to 22.3 sec

	parted -m -s -a none $DISK mkpart primary 0MB 32MB &&
	parted -m -s -a none $DISK mkpart primary 32MB $DEV_SIZE

3) dbench (dbench -t 20 -s 64)
- write throughput decreased from 38 MB/s to 1.89 MB/s

From a performance point of view, this definitely does not simulate an
actual SCSI device.

IMO, simulating SYNCHRONIZE CACHE by making it complete on a granularity of
whole seconds isn't a good approach, since the actual completion time
depends on whether there is any data cached in the drive and on how much
data is cached.

So is it possible to get rid of the very slow response by applying the long
delay only once every N commands?  For example, apply the long delay to only
every 10th or 20th SYNCHRONIZE CACHE command, roughly as in the sketch
below.
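
A rough, untested sketch of what I mean (the counter name and the N=16
threshold are made up for illustration, and locking is ignored):

	/*
	 * Hypothetical per-host counter; every Nth non-IMMED
	 * SYNCHRONIZE CACHE gets the multi-second delay, the rest
	 * complete with the normal short scsi_debug delay.
	 */
	static unsigned int sdebug_sc_count;

	static bool want_long_delay(void)
	{
		return (++sdebug_sc_count % 16) == 0;
	}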

Or is there another approach that avoids this issue?


Thanks,
Ming


