Re: failed command: WRITE FPDMA QUEUED with Samsung 860 EVO


 



On Thu, 2019-01-03 at 22:24 +0000, Sitsofe Wheeler wrote:
> Hi,
> 
> On Thu, 3 Jan 2019 at 20:47, Laurence Oberman <loberman@xxxxxxxxxx>
> wrote:
> > 
> > Hello
> > 
> > I put the 860 in an enclosure (MSA50) driven by a SAS HBA
> > (megaraid_sas).
> > 
> > The backplane supports either SAS or SATA.
> > 
> > /dev/sg2  0 0 49 0  0  /dev/sdb  ATA       Samsung SSD 860   1B6Q
> > 
> > Running your same fio test on the latest RHEL7 and 4.20.0+-1 kernels,
> > I am unable to reproduce this issue after multiple test runs.
> > 
> > Tests all run to completion with no errors on RHEL7 and upstream
> > kernels.
> > 
> > I have no way to test with a direct motherboard connection to a SATA
> > port at the moment, so if this is a host-side issue with SATA (ATA) I
> > would not see it.
> > 
> > What this likely means is that the drive itself is well behaved here,
> > so the power or cable issue I alluded to earlier, or possibly the host
> > ATA interface, may be worth looking into on your side.
> > 
> > RHEL7 kernel
> > 3.10.0-862.11.1.el7.x86_64
> 
> Thanks for going the extra mile on this Laurence - it does sound like
> whatever issue I'm seeing with the 860 EVO is local to my box. It's
> curious that others are seeing something similar (e.g.
> https://github.com/zfsonlinux/zfs/issues/4873#issuecomment-449798356)
> but maybe they're in the same boat as me.
> 
> > test: (g=0): rw=randread, bs=(R) 32.0KiB-32.0KiB, (W) 32.0KiB-32.0KiB, (T) 32.0KiB-32.0KiB, ioengine=libaio, iodepth=32
> > fio-3.3-38-gf5ec8
> > Starting 1 process
> > Jobs: 1 (f=1): [r(1)][100.0%][r=120MiB/s,w=0KiB/s][r=3839,w=0 IOPS][eta 00m:00s]
> > test: (groupid=0, jobs=1): err= 0: pid=3974: Thu Jan  3 15:14:10 2019
> >    read: IOPS=3827, BW=120MiB/s (125MB/s)(70.1GiB/600009msec)
> >     slat (usec): min=7, max=374, avg=23.78, stdev= 6.09
> >     clat (usec): min=449, max=509311, avg=8330.29, stdev=2060.29
> >      lat (usec): min=514, max=509331, avg=8355.00, stdev=2060.29
> >     clat percentiles (usec):
> >      |  1.00th=[ 5342],  5.00th=[ 7767], 10.00th=[ 8225], 20.00th=[ 8291],
> >      | 30.00th=[ 8291], 40.00th=[ 8291], 50.00th=[ 8291], 60.00th=[ 8291],
> >      | 70.00th=[ 8356], 80.00th=[ 8356], 90.00th=[ 8455], 95.00th=[ 8848],
> >      | 99.00th=[11600], 99.50th=[13042], 99.90th=[16581], 99.95th=[17695],
> >      | 99.99th=[19006]
> >    bw (  KiB/s): min=50560, max=124472, per=99.94%, avg=122409.89, stdev=2592.08, samples=1200
> >    iops        : min= 1580, max= 3889, avg=3825.22, stdev=81.01, samples=1200
> >   lat (usec)   : 500=0.01%, 750=0.03%, 1000=0.02%
> >   lat (msec)   : 2=0.08%, 4=0.32%, 10=97.20%, 20=2.34%, 50=0.01%
> >   lat (msec)   : 750=0.01%
> >   cpu          : usr=4.76%, sys=12.81%, ctx=2113947, majf=0, minf=14437
> >   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
> >      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
> >      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
> >      issued rwts: total=2296574,0,0,0 short=0,0,0,0 dropped=0,0,0,0
> >      latency   : target=0, window=0, percentile=100.00%, depth=32
> > 
> > Run status group 0 (all jobs):
> >    READ: bw=120MiB/s (125MB/s), 120MiB/s-120MiB/s (125MB/s-125MB/s), io=70.1GiB (75.3GB), run=600009-600009msec
> > 
> > Disk stats (read/write):
> >   sdb: ios=2295763/0, merge=0/0, ticks=18786069/0, in_queue=18784356, util=100.00%
> 
> For what it's worth, the speeds I see with NCQ off on the Samsung 860
> EVO are not far off what you're reporting (but are much lower than
> those I see on the MX500 in the same machine). I suppose it could just
> be that the MX500 is a better-performing SSD for the specific workload
> I have been testing...
> 
> --
> Sitsofe | http://sucs.org/~sits/

Hello Sitsofe

I am going to try a direct motherboard connection tomorrow.
My testing was with no flags passed to libata, but of course the ATA
layer is hidden on the host side in my test, as I am going via
megaraid_sas to the MSA50 shelf.
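
If it does turn out to be NCQ related once you are on a direct SATA
port, the usual quick check is to drop the queue depth or force NCQ off
in libata. A rough sketch only (assuming the drive shows up as /dev/sdX
on a plain AHCI/libata host; adjust the device name as needed):

  # Current NCQ depth; 31 usually means NCQ is active
  cat /sys/block/sdX/device/queue_depth

  # Drop to depth 1 at runtime, which effectively disables NCQ
  echo 1 > /sys/block/sdX/device/queue_depth

  # Or force NCQ off from boot with libata.force=noncq on the kernel
  # command line, then watch for the failed command while fio runs:
  dmesg -w | grep -i FPDMA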

Are you using 32k blocks on the MX500 as well? Is the MX500 on 12Gbit
or 6Gbit SAS? Was it the same read test via fio?
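
For reference, a fio command line matching the parameters in the output
above would be roughly as follows (direct=1 and the /dev/sdb target are
assumptions here, not necessarily what was used; adjust for your setup):

  fio --name=test --filename=/dev/sdb --rw=randread --bs=32k \
      --ioengine=libaio --iodepth=32 --direct=1 \
      --time_based --runtime=600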

Thanks
Laurence




