XFS buffered IO performance is very poor

Hi,

I ran some tests with fio on XFS, and I found that buffered IO performance is very poor. Here are some of the results:


                     read (IOPS)   write (IOPS)
direct IO + ext3        1848          1232
buffered IO + ext3      1976          1319
direct IO + XFS         1954          1304
buffered IO + XFS        307           203


I do not understand why there is such a big difference; ext3 is much better here.
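
A quick sanity check on the buffered XFS numbers against the full fio log below (simple arithmetic at 16 KB per IO):

307 IOPS * 16 KB = 4912 KB/s  (fio reports read  bw=4922.3KB/s)
203 IOPS * 16 KB = 3248 KB/s  (fio reports write bw=3262.4KB/s)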

direct IO parameters:

fio --filename=/data1/fio.dat --direct=1 --thread --rw=randrw --rwmixread=60 --ioengine=libaio --runtime=300 --iodepth=1 --size=40G --numjobs=32 -name=test_rw --group_reporting --bs=16k --time_base


buffered IO parameters:

fio --filename=/data1/fio.dat --direct=0 --thread --rw=randrw --rwmixread=60 --ioengine=libaio --runtime=300 --iodepth=1 --size=40G --numjobs=32 -name=test_rw --group_reporting --bs=16k --time_base
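
Note that with --direct=0 the libaio engine is not truly asynchronous: on buffered IO, io_submit() performs the IO synchronously before returning, so iodepth=1 buys nothing and submission latency dominates (see the slat numbers in the log below). As an untested variant for the buffered case only, the same workload can be expressed with the synchronous psync engine (only the engine is changed; iodepth is dropped since it is meaningless for sync engines):

fio --filename=/data1/fio.dat --direct=0 --thread --rw=randrw --rwmixread=60 --ioengine=psync --runtime=300 --size=40G --numjobs=32 --name=test_rw --group_reporting --bs=16k --time_based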


The system I used for my tests:

HW server: 4 cores (Intel), 32GB RAM, running RHEL 6.5

Kernel: 2.6.32-431.el6.x86_64

storage: 10 disks in RAID 1+0, stripe size: 256 KB


XFS format parameters:

#mkfs.xfs -d su=256k,sw=5 /dev/sdb1

#cat /proc/mounts

/dev/sdb1 /data1 xfs rw,noatime,attr2,delaylog,nobarrier,logbsize=256k,sunit=512,swidth=2560,noquota 0 0
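
The sunit/swidth values above are consistent with the mkfs parameters; in /proc/mounts they are expressed in 512-byte sectors:

su = 256 KB           -> sunit  = 256 * 1024 / 512 = 512
sw = 5 (data disks)   -> swidth = 5 * 512          = 2560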

#fdisk -ul
Device Boot      Start         End      Blocks   Id  System
/dev/sdb1             128  2929356359  1464678116   83  Linux
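
One thing worth double-checking from the fdisk output (my arithmetic only, not verified on the box): the partition starts at sector 128, which is a 64 KB offset and therefore not aligned to the 256 KB stripe unit:

128 sectors * 512 bytes = 65536 bytes = 64 KB
64 KB is not a multiple of 256 KB -> partition start may be stripe-misaligned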


# fio --filename=/data1/fio.dat --direct=0 --thread --rw=randrw --rwmixread=60  --ioengine=libaio --runtime=300  --iodepth=1 --size=40G --numjobs=32  -name=test_rw  --group_reporting --bs=16k --time_base 
test_rw: (g=0): rw=randrw, bs=16K-16K/16K-16K/16K-16K, ioengine=libaio, iodepth=1
...
test_rw: (g=0): rw=randrw, bs=16K-16K/16K-16K/16K-16K, ioengine=libaio, iodepth=1
fio-2.0.13
Starting 32 threads
Jobs: 32 (f=32): [mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm] [100.0% done] [5466K/3644K/0K /s] [341 /227 /0  iops] [eta 00m:00s]
test_rw: (groupid=0, jobs=32): err= 0: pid=5711: Wed Feb 11 15:26:30 2015
  read : io=1442.2MB, bw=4922.3KB/s, iops=307 , runt=300010msec
    slat (usec): min=7 , max=125345 , avg=5765.52, stdev=3741.61
    clat (usec): min=0 , max=192 , avg= 2.72, stdev= 1.12
     lat (usec): min=7 , max=125348 , avg=5770.09, stdev=3741.68
    clat percentiles (usec):
     |  1.00th=[    1],  5.00th=[    2], 10.00th=[    2], 20.00th=[    2],
     | 30.00th=[    2], 40.00th=[    3], 50.00th=[    3], 60.00th=[    3],
     | 70.00th=[    3], 80.00th=[    3], 90.00th=[    3], 95.00th=[    4],
     | 99.00th=[    4], 99.50th=[    4], 99.90th=[   14], 99.95th=[   16],
     | 99.99th=[   20]
    bw (KB/s)  : min=   16, max=  699, per=3.22%, avg=158.37, stdev=85.79
  write: io=978736KB, bw=3262.4KB/s, iops=203 , runt=300010msec
    slat (usec): min=10 , max=577043 , avg=148215.93, stdev=125650.40
    clat (usec): min=0 , max=198 , avg= 2.50, stdev= 1.26
     lat (usec): min=11 , max=577048 , avg=148220.20, stdev=125650.94
    clat percentiles (usec):
     |  1.00th=[    1],  5.00th=[    1], 10.00th=[    1], 20.00th=[    2],
     | 30.00th=[    2], 40.00th=[    2], 50.00th=[    3], 60.00th=[    3],
     | 70.00th=[    3], 80.00th=[    3], 90.00th=[    3], 95.00th=[    3],
     | 99.00th=[    4], 99.50th=[    6], 99.90th=[   14], 99.95th=[   14],
     | 99.99th=[   17]
    bw (KB/s)  : min=   25, max=  448, per=3.17%, avg=103.28, stdev=46.76
    lat (usec) : 2=6.40%, 4=88.39%, 10=4.93%, 20=0.27%, 50=0.01%
    lat (usec) : 100=0.01%, 250=0.01%
  cpu          : usr=0.00%, sys=0.13%, ctx=238853, majf=18446744073709551520, minf=18446744073709278371
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=92296/w=61171/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
   READ: io=1442.2MB, aggrb=4922KB/s, minb=4922KB/s, maxb=4922KB/s, mint=300010msec, maxt=300010msec
  WRITE: io=978736KB, aggrb=3262KB/s, minb=3262KB/s, maxb=3262KB/s, mint=300010msec, maxt=300010msec

Disk stats (read/write):
  sdb: ios=89616/55141, merge=0/0, ticks=442611/171325, in_queue=613823, util=97.08%
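
Reading the log above: clat averages a few microseconds while slat averages ~5.8 ms for reads and ~148 ms for writes, i.e. nearly all of the time is spent inside io_submit() itself, which fits buffered IO being synchronous under libaio. A simple way to watch dirty-page buildup while the buffered run is in flight (standard /proc/meminfo fields, offered only as a suggestion):

watch -n1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'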





