Re: High CPU consumption on Windows guest OS when libgfapi is used

Gluster volume configuration; the bold entries are the initial settings I have:

Volume Name: g37test
Type: Stripe
Volume ID: 3f9dae3d-08f9-4321-aeac-67f44c7eb1ac
Status: Created
Number of Bricks: 1 x 10 = 10
Transport-type: tcp
Bricks:
Brick1: 192.168.123.4:/mnt/sdb_mssd/data
Brick2: 192.168.123.4:/mnt/sdc_mssd/data
Brick3: 192.168.123.4:/mnt/sdd_mssd/data
Brick4: 192.168.123.4:/mnt/sde_mssd/data
Brick5: 192.168.123.4:/mnt/sdf_mssd/data
Brick6: 192.168.123.4:/mnt/sdg_mssd/data
Brick7: 192.168.123.4:/mnt/sdh_mssd/data
Brick8: 192.168.123.4:/mnt/sdj_mssd/data
Brick9: 192.168.123.4:/mnt/sdm_mssd/data
Brick10: 192.168.123.4:/mnt/sdn_mssd/data
Options Reconfigured:
server.allow-insecure: on
storage.owner-uid: 165
storage.owner-gid: 165
performance.quick-read: off
performance.io-cache: off
performance.read-ahead: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
nfs.disable: true

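For completeness, a hedged sketch of how those reconfigured options are applied from the CLI (one "volume set" call per option; g37test is the volume above):

# gluster volume set g37test performance.quick-read off
# gluster volume set g37test network.remote-dio enable
# (and so on for each entry under "Options Reconfigured")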

I tried the suggested settings, but guest CPU utilization is still high.

4k random write

IOPS 5054.46
Avg. response time (ms) 2.94
CPU utilization total (%) 96.09
CPU privileged time (%) 92.3

4k random read

IOPS 24805.38
Avg. response time (ms) 0.6
CPU utilization total (%) 92.77
CPU privileged time (%) 89.33

1) Disable write-behind: IOPS cannot go any higher due to guest CPU utilization, so I will test this once I set up the Ubuntu VM again (see the sketch below).
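
For reference, a minimal sketch of that toggle (performance.write-behind is the stock option name; it defaults to on):

# gluster volume set g37test performance.write-behind off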

2) This is what I got, captured during the Iometer test:

# ps -ef | grep qemu-kvm
qemu     12184     1 99 00:59 ?        00:24:15 /usr/libexec/qemu-kvm -name instance-000000b6 -S -machine pc-i440fx-rhel7.1.0,accel=kvm,usb=off -cpu Haswell,+abm,+pdpe1gb,+rdrand,+f16c,+osxsave,+dca,+pdcm,+xtpr,+tm2,+est,+smx,+vmx,+ds_cpl,+monitor,+dtes64,+pbe,+tm,+ht,+ss,+acpi,+ds,+vme,-rtm,-hle -m 8192 -realtime mlock=off -smp 4,sockets=4,cores=1,threads=1 -uuid 0994b9e9-e356-459e-8ff8-c302576b3d7f -smbios type=1,manufacturer=Fedora Project,product=OpenStack Nova,version=12.0.1-1.el7,serial=423c213b-7142-4053-a119-d06ae77b432a,uuid=0994b9e9-e356-459e-8ff8-c302576b3d7f,family=Virtual Machine -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-instance-000000b6/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=gluster://ad11:24007/adtestOS/volume-2e04c86f-0225-41b6-aadc-67a3a72853d4,if=none,id=drive-virtio-disk0,format=raw,serial=2e04c86f-0225-41b6-aadc-67a3a72853d4,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=gluster://ad11:24007/adtestOS/volume-1d5c62af-09b0-44ea-b4e8-91d3a14859cc,if=none,id=drive-virtio-disk1,format=raw,serial=1d5c62af-09b0-44ea-b4e8-91d3a14859cc,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk1,id=virtio-disk1 -drive file=gluster://ad11:24007/adtestOS/volume-7014729d-b6a4-4907-b50d-01f870eebb5e,if=none,id=drive-virtio-disk2,format=raw,serial=7014729d-b6a4-4907-b50d-01f870eebb5e,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk2,id=virtio-disk2 -netdev tap,fd=26,id=hostnet0,vhost=on,vhostfd=28 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:67:11:4a,bus=pci.0,addr=0x3 -chardev file,id=charserial0,path=/var/lib/nova/instances/0994b9e9-e356-459e-8ff8-c302576b3d7f/console.log -device isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -device usb-tablet,id=input0 -vnc 0.0.0.0:0 -k en-us -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7 -msg timestamp=on
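
As background for the libgfapi vs. fuse numbers quoted below, the access path is decided entirely by the -drive spec; a hedged sketch (the /mnt/adtestOS FUSE mount point is hypothetical):

# libgfapi: qemu links against libgfapi and talks to the bricks directly
-drive file=gluster://ad11:24007/adtestOS/volume-2e04c86f-...,format=raw,cache=none
# FUSE: qemu does plain file I/O through a glusterfs FUSE mount
-drive file=/mnt/adtestOS/volume-2e04c86f-...,format=raw,cache=none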

# perf record -p 12184
^C[ perf record: Woken up 485 times to write data ]
[ perf record: Captured and wrote 125.231 MB perf.data (3252540 samples) ]
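
The annotated listing that follows came from perf annotate on the hottest symbol; a hedged sketch of the invocations (perf report to rank the symbols first):

# perf report --stdio | head     # __memset_sse2 in libc tops the profile
# perf annotate __memset_sse2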

__memset_sse2  /usr/lib64/libc-2.17.so
  0.01 │       lea    next_state.7659+0x19c,%rcx
  0.01 │       movswq (%rcx,%r8,2),%rcx
  0.20 │       lea    (%rcx,%r11,1),%r11
  0.00 │     ↓ jmpq   fffffffffff77150
       │       nop
  0.00 │920:   cmp    $0x0,%r9
       │     ↑ je     8c0
       │     ↓ jmp    930
       │       nop
  0.08 │930:   lea    -0x80(%r8),%r8
  7.98 │       cmp    $0x80,%r8
  1.73 │       movntd %xmm0,(%rdi)
  7.91 │       movntd %xmm0,0x10(%rdi)
  4.16 │       movntd %xmm0,0x20(%rdi)
 39.22 │       movntd %xmm0,0x30(%rdi)
  7.35 │       movntd %xmm0,0x40(%rdi)
  0.98 │       movntd %xmm0,0x50(%rdi)
  2.15 │       movntd %xmm0,0x60(%rdi)
 23.22 │       movntd %xmm0,0x70(%rdi)
  1.72 │       lea    0x80(%rdi),%rdi
  0.04 │     ↑ jae    930
  0.00 │       sfence
  0.00 │       add    %r8,%rdi
  0.00 │       lea    __GI_memset+0x43d,%r11
  0.00 │       lea    next_state.7659+0x19c,%rcx
       │       movswq (%rcx,%r8,2),%rcx
  0.03 │       lea    (%rcx,%r11,1),%r11
  0.00 │     ↓ jmpq   fffffffffff77150
       │       nop
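If it would help to see which qemu/gluster code paths are issuing these memsets, a possible follow-up (a sketch, assuming DWARF unwinding works against this qemu-kvm build):

# perf record --call-graph dwarf -p 12184 -- sleep 30
# perf report --stdio -g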

Thanks.

Cw



On Wed, Apr 20, 2016 at 7:18 AM, Vijay Bellur <vbellur@xxxxxxxxxx> wrote:
On Tue, Apr 19, 2016 at 1:24 AM, qingwei wei <tchengwee@xxxxxxxxx> wrote:
> Hi Vijay,
>
> I reran the test with gluster 3.7.11 and found that the utilization is still
> high when I use libgfapi. The write performance is also not good.
>
> Below are the info and results:
>
> Hypervisor host:
> libvirt 1.2.17-13.el7_2.4
> qemu-kvm 2.1.2-23.el7.1
>
>
> Windows VM:
> Windows 2008R2
> IOmeter
>
> Ubuntu VM:
> Ubuntu 14.04
> fio 2.1.3
>
>
> Windows VM (libgfapi)
>
> 4k random read
>
> IOPS 22920.76
> Avg. response time (ms) 0.65
> CPU utilization total (%) 82.02
> CPU privileged time (%) 77.76
>
> 4k random write
>
> IOPS 4526.39
> Avg. response time (ms) 3.26
> CPU utilization total (%) 93.61
> CPU privileged time (%) 90.24
>
> Windows VM (fuse)
>
> 4k random read
>
> IOPS 14662.86
> Avg. response time (ms) 1.08
> CPU utilization total (%) 27.66
> CPU privileged time (%) 24.45
>
> 4k random write
>
> IOPS 16911.66
> Avg. response time (ms) 0.94
> CPU utilization total (%) 26.74
> CPU privileged time (%) 22.64
>
> Ubuntu VM (libgfapi)
>
> 4k random read
>
> IOPS 34364
> Avg. response time (ms) 0.46
> CPU utilization total (%) 6.09
>
> 4k random write
>
> IOPS 4531
> Avg. response time (ms) 3.53
> CPU utilization total (%) 1.2
>
> Ubuntu VM (fuse)
>
> 4k random read
>
> IOPS 17341
> Avg. response time (ms) 0.92
> CPU utilization total (%) 4.22
>
> 4k random write
>
> IOPS 17611
> Avg. response time (ms) 0.91
> CPU utilization total (%) 4.65
>
> Any comments on this, or things I should try?
>

Can you please share your gluster volume configuration? It might be
worth checking whether the tunables in the virt profile
(extras/group-virt.example) are applied on this volume.
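
(For reference, a hedged sketch of what the virt group translated to around the 3.7 era; check extras/group-virt.example in your tree for the authoritative list:

# gluster volume set <volname> group virt
# ...which roughly applies: quick-read=off, read-ahead=off, io-cache=off,
# stat-prefetch=off, eager-lock=enable, remote-dio=enable,
# quorum-type=auto, server-quorum-type=server)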

Additionally, I would try to:
1. disable write-behind in gluster to see if there is any performance
difference for writes
2. use perf record -p <qemu> followed by perf annotate to observe the
hot threads.

HTH,
Vijay

_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel
