Re: Solaris issues in 1.3.1

On Wed, Aug 5, 2009 at 1:02 PM, Jens Axboe <jens.axboe@xxxxxxxxxx> wrote:
> On Wed, Aug 05 2009, Jens Axboe wrote:
>> On Wed, Aug 05 2009, Chris Worley wrote:
>> > I'm running fio 1.3.1 in Solaris.  I'm seeing two issues:
>> >
>> > 1) Direct I/O throws an error.
>> > 2) Performance is way too high.
>> >
>> > W/o Direct I/O my size is set to 2.5x memory capacity to ensure no
>> > caching... yet performance for read/write is up to >7x the theoretical
>> > speed of the storage device (i.e. it's reporting 7GB/s, when
>> > theoretical for the device is 1GB/s).
>> >
>> > I'm not seeing the reporting issue in 1.2.1.
>>
<snip>
>
> Oh, and if you are not, just wait and I'll double check things here. But
> please send me your job file(s) so I can reproduce more easily, thanks.

Sequential reads and writes seem most affected.  Random and mixed
random workloads seem mostly unaffected, but I've seen them go wild as
well.  Here's an example incantation:

fio --rw=write --bs=1m --numjobs=64 --iodepth=64 --sync=0 \
    --randrepeat=0 --norandommap --ioengine=sync \
    --filename=/dev/<your device here> --name=test \
    --loops=1000 --size=128849018880 \
    --runtime=600 --group_reporting
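
Since you asked for job files: I believe the following job file is
equivalent to that command line (an untested sketch; the device path
is a placeholder, same as above):

; job-file form of the incantation above (untested)
[test]
rw=write
bs=1m
numjobs=64
iodepth=64
sync=0
randrepeat=0
norandommap
ioengine=sync
filename=/dev/<your device here>
loops=1000
size=128849018880
runtime=600
group_reporting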

Here's some example output (not from the above command, but from a
similar one with 512KB blocks):

test: (g=0): rw=write, bs=512K-512K/512K-512K, ioengine=sync, iodepth=64
...
test: (g=0): rw=write, bs=512K-512K/512K-512K, ioengine=sync, iodepth=64
Starting 64 processes
Jobs: 1 (f=1): [_____________________________________________________W__________] [1.6% done] [0K/0K /s] [0/0 iops] [eta 10h:29m:52s]
test: (groupid=0, jobs=64): err= 0: pid=1222
  write: io=787299MiB, bw=1312MiB/s, iops=2623, runt=600153msec
    clat (usec): min=173, max=1107K, avg=24145.42, stdev=6786.14
    bw (KiB/s) : min=    0, max=347101, per=1.59%, avg=21306.67, stdev=4105.00
  cpu          : usr=0.04%, sys=3.31%, ctx=6406332, majf=0, minf=0
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=0/1574598, short=0/0
     lat (usec): 250=1.07%, 500=17.56%, 750=28.71%, 1000=4.53%
     lat (msec): 2=8.75%, 4=5.22%, 10=6.40%, 20=7.98%, 50=6.54%
     lat (msec): 100=2.02%, 250=10.58%, 500=0.64%, 750=0.01%, 1000=0.01%
     lat (msec): 2000=0.01%

Run status group 0 (all jobs):
  WRITE: io=787299MiB, aggrb=1312MiB/s, minb=1312MiB/s, maxb=1312MiB/s, mint=600153msec, maxt=600153msec
test: (g=0): rw=read, bs=512K-512K/512K-512K, ioengine=sync, iodepth=64
...
test: (g=0): rw=read, bs=512K-512K/512K-512K, ioengine=sync, iodepth=64
Starting 64 processes
Jobs: 1 (f=1): [_______________________________________________________________R] [1.6% done] [2153M/0K /s] [4205/0 iops] [eta 10h:29m:59s]
test: (groupid=0, jobs=64): err= 0: pid=1290
  read : io=145295MiB, bw=7233MiB/s, iops=148, runt=600009msec
    clat (usec): min=381, max=1251K, avg=4390.26, stdev=2714.96
    bw (KiB/s) : min=  538, max=451584, per=1.55%, avg=114760.43, stdev=6304.82
  cpu          : usr=0.15%, sys=24.90%, ctx=4695895, majf=1, minf=0
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w: total=8679198/0, short=0/0
     lat (usec): 500=0.01%, 750=0.02%, 1000=1.05%
     lat (msec): 2=89.40%, 4=3.57%, 10=1.86%, 20=0.86%, 50=1.29%
     lat (msec): 100=0.90%, 250=0.89%, 500=0.16%, 750=0.01%, 1000=0.01%
     lat (msec): 2000=0.01%

Run status group 0 (all jobs):
   READ: io=145295MiB, aggrb=7233MiB/s, minb=7233MiB/s, maxb=7233MiB/s, mint=600009msec, maxt=600009msec

Maybe if the "B" in "minb=7233MiB/s" meant "bits", the read figure
would be about right... but then the "write" case (above) would be
reported 5x too slow.
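
For the record, the back-of-the-envelope arithmetic behind that guess,
using the numbers reported in the runs above (the ~1GB/s "theoretical"
figure is from my earlier mail):

$ echo '7233/8' | bc -l   # read: ~904 MiB/s if those were megabits; plausible vs ~1GB/s theoretical
904.12500000000000000000
$ echo '1312/8' | bc -l   # write: ~164 MiB/s under the same reading; implausibly slow
164.00000000000000000000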

Thanks,

Chris
