Re: Question about KVM IO performance with FreeBSD as a guest OS

On Fri, Jun 28, 2019 at 03:51:04PM +0200, rainer@xxxxxxxxxxxxxxx wrote:
> Am 2019-06-28 11:53, schrieb Stefan Hajnoczi:
> > On Sun, Jun 23, 2019 at 03:46:29PM +0200, Rainer Duffner wrote:
> on advice from my coworker, I created the image like this:
> 
> openstack image create --file ../freebsd-image/freebsd12_v1.41.qcow2
> --disk-format qcow2 --min-disk 6 --min-ram 512 --private --protected
> --property hw_scsi_model=virtio-scsi --property hw_disk_bus=scsi --property
> hw_qemu_guest_agent=yes --property os_distro=freebsd --property
> os_version="12.0" "FreeBSD 12.0 amd 64 take3"
> 
> 
> This time, I got a bit better results:
> 
> 
> root@rdu5:~ # fio -filename=/srv/test2.fio_test_file -direct=1 -iodepth 4

I think iodepth has no effect here.  It applies to asynchronous I/O
engines like ioengine=libaio.  It's ignored for psync.
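
For iodepth to matter you'd need an asynchronous engine.  libaio is
Linux-only; on a FreeBSD guest fio's posixaio engine is the usual
choice.  An untested sketch, reusing your parameters from above (the
job name is arbitrary):

  fio -filename=/srv/test2.fio_test_file -direct=1 -iodepth 4 \
      -thread -rw=randrw -ioengine=posixaio -bs=4k -size 8G -numjobs=4 \
      -runtime=60 -group_reporting -name=asynctest

With psync each thread submits one I/O at a time, so the effective
queue depth per job stays at 1 regardless of the iodepth setting.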

> -thread -rw=randrw -ioengine=psync -bs=4k -size 8G -numjobs=4 -runtime=60
> -group_reporting -name=pleasehelpme
> pleasehelpme: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T)
> 4096B-4096B, ioengine=psync, iodepth=4
> ...
> fio-3.13
> Starting 4 threads
> pleasehelpme: Laying out IO file (1 file / 8192MiB)
> Jobs: 4 (f=4): [m(4)][100.0%][r=1461KiB/s,w=1409KiB/s][r=365,w=352 IOPS][eta
> 00m:00s]
> pleasehelpme: (groupid=0, jobs=4): err= 0: pid=100120: Fri Jun 28 15:44:42
> 2019
>   read: IOPS=368, BW=1473KiB/s (1508kB/s)(86.3MiB/60005msec)
>     clat (usec): min=8, max=139540, avg=6534.89, stdev=5761.10
>      lat (usec): min=13, max=139548, avg=6542.68, stdev=5761.00
>     clat percentiles (usec):
>      |  1.00th=[   13],  5.00th=[   17], 10.00th=[   25], 20.00th=[ 1827],
>      | 30.00th=[ 3032], 40.00th=[ 4555], 50.00th=[ 5538], 60.00th=[ 6718],
>      | 70.00th=[ 8160], 80.00th=[10290], 90.00th=[13829], 95.00th=[17433],
>      | 99.00th=[25822], 99.50th=[28967], 99.90th=[37487], 99.95th=[40633],
>      | 99.99th=[51643]
>    bw (  KiB/s): min=  972, max= 2135, per=97.21%, avg=1430.93, stdev=55.37,
> samples=476
>    iops        : min=  242, max=  532, avg=356.10, stdev=13.86, samples=476
>   write: IOPS=373, BW=1496KiB/s (1532kB/s)(87.6MiB/60005msec)
>     clat (usec): min=13, max=46140, avg=4174.36, stdev=2834.86
>      lat (usec): min=19, max=46146, avg=4182.13, stdev=2835.08
>     clat percentiles (usec):
>      |  1.00th=[   40],  5.00th=[   90], 10.00th=[ 1012], 20.00th=[ 2008],
>      | 30.00th=[ 2474], 40.00th=[ 3097], 50.00th=[ 3949], 60.00th=[ 4555],
>      | 70.00th=[ 5145], 80.00th=[ 6063], 90.00th=[ 7439], 95.00th=[ 9110],
>      | 99.00th=[13435], 99.50th=[15401], 99.90th=[20055], 99.95th=[22152],
>      | 99.99th=[36439]
>    bw (  KiB/s): min=  825, max= 2295, per=97.26%, avg=1453.99, stdev=66.67,
> samples=476
>    iops        : min=  206, max=  572, avg=361.90, stdev=16.66, samples=476
>   lat (usec)   : 10=0.03%, 20=4.14%, 50=3.47%, 100=2.29%, 250=2.04%
>   lat (usec)   : 500=0.06%, 750=0.51%, 1000=0.71%
>   lat (msec)   : 2=7.38%, 4=22.88%, 10=44.07%, 20=10.86%, 50=1.55%
>   lat (msec)   : 100=0.01%, 250=0.01%
>   cpu          : usr=0.11%, sys=2.08%, ctx=83384, majf=0, minf=0
>   IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%,
> >=64=0.0%
>      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%,
> >=64=0.0%
>      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%,
> >=64=0.0%
>      issued rwts: total=22092,22436,0,0 short=0,0,0,0 dropped=0,0,0,0
>      latency   : target=0, window=0, percentile=100.00%, depth=4
> 
> Run status group 0 (all jobs):
>    READ: bw=1473KiB/s (1508kB/s), 1473KiB/s-1473KiB/s (1508kB/s-1508kB/s),
> io=86.3MiB (90.5MB), run=60005-60005msec
>   WRITE: bw=1496KiB/s (1532kB/s), 1496KiB/s-1496KiB/s (1532kB/s-1532kB/s),
> io=87.6MiB (91.9MB), run=60005-60005msec
> 
> 
> 
> Which is more or less half (or a third) of what I got on CentOS.

Are you using the exact same fio command-line on CentOS?

Have you tried virtio-blk instead of virtio-scsi?
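
If you want to test that, setting hw_disk_bus=virtio on the image (and
dropping the SCSI property) should switch it to virtio-blk; you'd then
need to boot a fresh instance, since the bus is fixed at instance
creation.  Roughly, from memory, so please double-check against the
client's help output:

  openstack image set --property hw_disk_bus=virtio \
      "FreeBSD 12.0 amd 64 take3"
  openstack image unset --property hw_scsi_model \
      "FreeBSD 12.0 amd 64 take3"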

Are you able to post the QEMU command-line from the host (ps aux | grep
qemu)?  Since --property os_distro=freebsd was used when creating the
image, it's likely that the resulting guest configuration differs from
the CentOS guest.  Let's compare the two QEMU command-lines.
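
For reference, the storage part of the command-line usually looks
roughly like one of the following (ids and drive options will differ):

  virtio-scsi: -device virtio-scsi-pci,id=scsi0 \
               -device scsi-hd,bus=scsi0.0,drive=drive0
  virtio-blk:  -device virtio-blk-pci,drive=drive0

The cache= and aio= options on the matching -drive/-blockdev argument
are also worth comparing; they often account for large differences in
random I/O results.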

Stefan
