Cannot get rbd cache perf counters

Recently I wanted to test the performance benefit of the rbd cache, but I could not see any obvious benefit on my setup. To make sure the rbd cache was actually enabled, I tried to read the rbd cache perf counters, but I could not get them. To narrow down how to enable the rbd cache perf counters, I built a simple setup (one client host running VMs, and a Ceph cluster with two OSDs, each with an SSD partition for its journal) and built ceph-0.67.4.
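For reference, the kind of workload I use inside the guest to look for a cache benefit is a small-block direct-I/O write test against the rbd-backed disk. An illustrative fio job (the parameters are an example only; /dev/vdb matches the rbd disk in the XML below):

    # sequential 4k writes at queue depth 1: the case where rbd writeback caching should help most
    fio --name=rbdtest --filename=/dev/vdb --rw=write --bs=4k --iodepth=1 \
        --ioengine=libaio --direct=1 --runtime=60 --time_based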

 

My ceph.conf is as follows:

[global]
    debug default = 0
    log file = /var/log/ceph/$name.log

    max open files = 131072

    auth cluster required = none
    auth service required = none
    auth client required = none
    rbd cache = true

[mon.a]
    host = {monitor_host_name}
    mon addr = {monitor_addr}

[osd.0]
    host = {osd.0_hostname}
    public addr = {public_addr}
    cluster addr = {cluster_addr}
    osd mkfs type = xfs
    devs = /dev/sdb1
    osd journal = /dev/sdd5

[osd.1]
    host = {osd.1_hostname}
    public addr = {public_addr}
    cluster addr = {cluster_addr}
    osd mkfs type = xfs
    devs = /dev/sdc1
    osd journal = /dev/sdd6
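After the daemons come up, the cluster state can be sanity-checked with the standard commands before creating any images:

    ceph -s          # overall health, monitor and OSD status
    ceph osd tree    # confirms both OSDs are up and in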

 

 

After the Ceph cluster is built, I create an rbd image with:

    rbd create --size 10240 --new-format test
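To double-check the image, rbd ls/info can be used (the exact output fields vary by version):

    rbd ls rbd          # list images in the default 'rbd' pool
    rbd info rbd/test   # shows size, object order, and image format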

 

Then I use virsh to start a VM; below is my domain XML file:

 

<domain type='qemu'>
  <name>test</name>
  <memory unit='KiB'>524288</memory>
  <currentMemory unit='KiB'>524288</currentMemory>
  <vcpu placement='static'>1</vcpu>
  <os>
    <type arch='x86_64' machine='pc-1.2'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-model'>
    <model fallback='allow'/>
  </cpu>
  <clock offset='utc'>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='rtc' tickpolicy='catchup'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/root/disk'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source protocol='rbd' name='rbd/test:rbd_cache=true:rbd_cache_writethrough_until_flush=true'/>
      <target dev='vdb' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </disk>
    <graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0' keymap='en-us'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
  </devices>
</domain>
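Assuming the XML above is saved as test.xml (the file name here is just for illustration), the domain is defined and started with:

    virsh define test.xml   # register the domain with libvirt
    virsh start test        # boot the vm named in the <name> element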

 

Then I add an admin socket for rbd clients to ceph.conf on my client host; below is that config:

 

[global]
    auth cluster required = none
    auth service required = none
    auth client required = none
    rbd cache = true
    rbd cache writethrough until flush = true

[client]
    admin socket = /var/run/ceph/rbd-$pid.asok

[mon.a]
    host = {monitor_host_name}
    mon addr = {monitor_host_addr}
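Since $pid expands to the process id of the librbd client, each qemu process gets its own socket under /var/run/ceph. The socket can be matched to the VM like this:

    ls /var/run/ceph/                  # e.g. rbd-3526.asok
    ps -C qemu-system-x86_64 -o pid=   # the pid should match the number in the socket name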

 

 

Then I queried the rbd cache perf counters through this socket, but the output does not contain any rbd cache statistics.

 

Output of ceph --admin-daemon /var/run/ceph/rbd-3526.asok perf dump:

 

{ "objecter": { "op_active": 0,

      "op_laggy": 0,

      "op_send": 0,

      "op_send_bytes": 0,

      "op_resend": 0,

      "op_ack": 0,

      "op_commit": 0,

      "op": 0,

      "op_r": 0,

      "op_w": 0,

      "op_rmw": 0,

      "op_pg": 0,

      "osdop_stat": 0,

      "osdop_create": 0,

      "osdop_read": 0,

      "osdop_write": 0,

      "osdop_writefull": 0,

      "osdop_append": 0,

      "osdop_zero": 0,

      "osdop_truncate": 0,

      "osdop_delete": 0,

      "osdop_mapext": 0,

      "osdop_sparse_read": 0,

      "osdop_clonerange": 0,

      "osdop_getxattr": 0,

      "osdop_setxattr": 0,

      "osdop_cmpxattr": 0,

      "osdop_rmxattr": 0,

      "osdop_resetxattrs": 0,

      "osdop_tmap_up": 0,

      "osdop_tmap_put": 0,

      "osdop_tmap_get": 0,

      "osdop_call": 0,

      "osdop_watch": 0,

      "osdop_notify": 0,

      "osdop_src_cmpxattr": 0,

      "osdop_pgls": 0,

      "osdop_pgls_filter": 0,

      "osdop_other": 0,

      "linger_active": 0,

      "linger_send": 0,

      "linger_resend": 0,

      "poolop_active": 0,

      "poolop_send": 0,

      "poolop_resend": 0,

      "poolstat_active": 0,

      "poolstat_send": 0,

      "poolstat_resend": 0,

      "statfs_active": 0,

      "statfs_send": 0,

      "statfs_resend": 0,

      "command_active": 0,

      "command_send": 0,

      "command_resend": 0,

      "map_epoch": 0,

      "map_full": 0,

      "map_inc": 0,

      "osd_sessions": 0,

      "osd_session_open": 0,

      "osd_session_close": 0,

      "osd_laggy": 0},

  "throttle-msgr_dispatch_throttler-radosclient": { "val": 0,

      "max": 104857600,

      "get": 11,

      "get_sum": 5655,

      "get_or_fail_fail": 0,

      "get_or_fail_success": 0,

      "take": 0,

      "take_sum": 0,

      "put": 11,

      "put_sum": 5655,

      "wait": { "avgcount": 0,

          "sum": 0.000000000}},

  "throttle-objecter_bytes": { "val": 0,

      "max": 104857600,

      "get": 0,

      "get_sum": 0,

      "get_or_fail_fail": 0,

      "get_or_fail_success": 0,

      "take": 0,

      "take_sum": 0,

      "put": 0,

      "put_sum": 0,

      "wait": { "avgcount": 0,

          "sum": 0.000000000}},

  "throttle-objecter_ops": { "val": 0,

      "max": 1024,

      "get": 0,

      "get_sum": 0,

      "get_or_fail_fail": 0,

      "get_or_fail_success": 0,

      "take": 0,

      "take_sum": 0,

      "put": 0,

      "put_sum": 0,

      "wait": { "avgcount": 0,

          "sum": 0.000000000}}}

 

QEMU version (output of qemu-system-x86_64 --version):

QEMU emulator version 1.2.0 (qemu-kvm-1.2.0+noroms-0ubuntu2.12.10.5, Debian), Copyright (c) 2003-2008 Fabrice Bellard

 

Can anybody help me? Any hints would be appreciated.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
