Re: how to enable rbd cache

It seems that the client admin socket has a life cycle: when any operation is issued to rbd, an rbd admin socket appears in the /var/run/ceph directory. However, when I use this admin socket to dump the perf counters, the rbd cache counters are not in the results.
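For example, just to illustrate the life cycle (the image name is only a placeholder):

    rbd info rbd/some-image &    # any librbd operation creates /var/run/ceph/rbd-$pid.asok while it runs
    ls /var/run/ceph/            # the rbd-<pid>.asok socket is present only while that client process is alive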

The output of 'ceph --admin-daemon /var/run/ceph/rbd-12856.asok perf dump':

{ "objecter": { "op_active": 0,
      "op_laggy": 0,
      "op_send": 0,
      "op_send_bytes": 0,
      "op_resend": 0,
      "op_ack": 0,
      "op_commit": 0,
      "op": 0,
      "op_r": 0,
      "op_w": 0,
      "op_rmw": 0,
      "op_pg": 0,
      "osdop_stat": 0,
      "osdop_create": 0,
      "osdop_read": 0,
      "osdop_write": 0,
      "osdop_writefull": 0,
      "osdop_append": 0,
      "osdop_zero": 0,
      "osdop_truncate": 0,
      "osdop_delete": 0,
      "osdop_mapext": 0,
      "osdop_sparse_read": 0,
      "osdop_clonerange": 0,
      "osdop_getxattr": 0,
      "osdop_setxattr": 0,
      "osdop_cmpxattr": 0,
      "osdop_rmxattr": 0,
      "osdop_resetxattrs": 0,
      "osdop_tmap_up": 0,
      "osdop_tmap_put": 0,
      "osdop_tmap_get": 0,
      "osdop_call": 0,
      "osdop_watch": 0,
      "osdop_notify": 0,
      "osdop_src_cmpxattr": 0,
      "osdop_pgls": 0,
      "osdop_pgls_filter": 0,
      "osdop_other": 0,
      "linger_active": 0,
      "linger_send": 0,
      "linger_resend": 0,
      "poolop_active": 0,
      "poolop_send": 0,
      "poolop_resend": 0,
      "poolstat_active": 0,
      "poolstat_send": 0,
      "poolstat_resend": 0,
      "statfs_active": 0,
      "statfs_send": 0,
      "statfs_resend": 0,
      "command_active": 0,
      "command_send": 0,
      "command_resend": 0,
      "map_epoch": 0,
      "map_full": 0,
      "map_inc": 0,
      "osd_sessions": 0,
      "osd_session_open": 0,
      "osd_session_close": 0,
      "osd_laggy": 0},
  "throttle-msgr_dispatch_throttler-radosclient": { "val": 0,
      "max": 104857600,
      "get": 14,
      "get_sum": 7540,
      "get_or_fail_fail": 0,
      "get_or_fail_success": 0,
      "take": 0,
      "take_sum": 0,
      "put": 14,
      "put_sum": 7540,
      "wait": { "avgcount": 0,
          "sum": 0.000000000}},
  "throttle-objecter_bytes": { "val": 0,
      "max": 104857600,
      "get": 0,
      "get_sum": 0,
      "get_or_fail_fail": 0,
      "get_or_fail_success": 0,
      "take": 0,
      "take_sum": 0,
      "put": 0,
      "put_sum": 0,
      "wait": { "avgcount": 0,
          "sum": 0.000000000}},
  "throttle-objecter_ops": { "val": 0,
      "max": 1024,
      "get": 0,
      "get_sum": 0,
      "get_or_fail_fail": 0,
      "get_or_fail_success": 0,
      "take": 0,
      "take_sum": 0,
      "put": 0,
      "put_sum": 0,
      "wait": { "avgcount": 0,
          "sum": 0.000000000}}}

The results do not include the rbd cache perf counters, even though src/osdc/ObjectCacher.cc clearly defines them. Below is a snippet of the rbd cache perf counter registration in src/osdc/ObjectCacher.cc:

void ObjectCacher::perf_start()
{
  string n = "objectcacher-" + name;
  PerfCountersBuilder plb(cct, n, l_objectcacher_first, l_objectcacher_last);

  plb.add_u64_counter(l_objectcacher_cache_ops_hit, "cache_ops_hit");
  plb.add_u64_counter(l_objectcacher_cache_ops_miss, "cache_ops_miss");
  plb.add_u64_counter(l_objectcacher_cache_bytes_hit, "cache_bytes_hit");
  plb.add_u64_counter(l_objectcacher_cache_bytes_miss, "cache_bytes_miss");
  plb.add_u64_counter(l_objectcacher_data_read, "data_read");
  plb.add_u64_counter(l_objectcacher_data_written, "data_written");
  plb.add_u64_counter(l_objectcacher_data_flushed, "data_flushed");
  plb.add_u64_counter(l_objectcacher_overwritten_in_flush,
                      "data_overwritten_while_flushing");
  plb.add_u64_counter(l_objectcacher_write_ops_blocked, "write_ops_blocked");
  plb.add_u64_counter(l_objectcacher_write_bytes_blocked, "write_bytes_blocked");
  plb.add_time(l_objectcacher_write_time_blocked, "write_time_blocked");

  perfcounter = plb.create_perf_counters();
  cct->get_perfcounters_collection()->add(perfcounter);
}  
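Presumably these counters are only registered once librbd actually instantiates the ObjectCacher, i.e. once 'rbd cache = true' has taken effect for that client; in that case the dump should contain a section named "objectcacher-<name>" (see the 'string n = "objectcacher-" + name' line above). A quick check on the same socket, for example:

    ceph --admin-daemon /var/run/ceph/rbd-12856.asok perf dump | grep objectcacher
    ceph --admin-daemon /var/run/ceph/rbd-12856.asok config show | grep rbd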

My question is: how do I get these rbd cache perf counters? Any tips will be greatly appreciated.

-----Original Message-----
From: ceph-users-bounces@xxxxxxxxxxxxxx [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Shu, Xinxin
Sent: Tuesday, November 26, 2013 2:32 PM
To: Mike Dawson
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re:  how to enable rbd cache

Hi Mike, I enabled the rbd admin socket according to your suggestions and added the admin socket option to my ceph.conf, but there is no .asok file in the /var/run/ceph directory. I use nova to boot the instances. Below are my steps to enable the rbd admin socket; if anything is wrong, please let me know:

1: Add the rbd admin socket option to /etc/ceph/ceph.conf. Here is my ceph.conf on the client hosts:
    [global]
    log file = /var/log/ceph/$name.log
    max open files = 131072
    auth cluster required = none
    auth service required = none
    auth client required = none
    rbd cache = true
    debug perfcounter = 20
[client.volumes]
    admin socket = /var/run/ceph/rbd-$pid.asok
[mon.a]
    host = {monitor_host_name}
    mon addr = {monitor_host_addr}
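As a sanity check (Mike notes below that the admin socket setting under [client.volumes] applies to a client running with that identity, and that a more generic [client] section also works, with some downsides), I could temporarily move the option there:

    [client]
        admin socket = /var/run/ceph/rbd-$pid.asok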

2: Modify /etc/apparmor.d/abstractions/libvirt-qemu and add:

     # for rbd
     capability mknod,

     # for rbd
     /etc/ceph/ceph.conf r,
     /var/log/ceph/* rw,
     /var/run/ceph/** rw,

Then restart the libvirt-bin and nova-compute services.
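One difference from Mike's rules quoted below: his file rule also covers /run/ceph. On systems where /var/run is a symlink to /run, that broader pattern may be needed (this is only a guess on my part):

     # for rbd
     /{,var/}run/ceph/** rw,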

3: Recreate the nova instances and attach an rbd volume, then execute 'dd if=/dev/zero of=/dev/vdb bs=64k'. After that, I checked for the /var/run/ceph/rbd-$pid.asok socket, but it did not exist.
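Things I have not checked yet (just guesses at where the socket creation could be failing):

    ls -ld /var/run/ceph                          # the qemu process (often running as a non-root user such as libvirt-qemu) must be able to write here
    grep -i 'apparmor.*denied' /var/log/syslog    # AppArmor denials would show up here when the guest starts or does I/O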

My ceph version is cuttlefish and OpenStack is Folsom. Does anything look weird to you? Please let me know.
  
-----Original Message-----
From: Mike Dawson [mailto:mike.dawson@xxxxxxxxxxxx]
Sent: Tuesday, November 26, 2013 12:41 AM
To: Shu, Xinxin
Cc: Gregory Farnum; Mark Nelson; ceph-users@xxxxxxxxxxxxxx
Subject: Re:  how to enable rbd cache

Greg is right, you need to enable RBD admin sockets. This can be a bit tricky though, so here are a few tips:

1) In ceph.conf on the compute node, explicitly set a location for the admin socket:

[client.volumes]
     admin socket = /var/run/ceph/rbd-$pid.asok

In this example, libvirt/qemu is running with permissions from ceph.client.volumes.keyring. If you use something different, adjust accordingly. You can put this under a more generic [client] section, but there are some downsides (like a new admin socket for each ceph cli command).
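Since $pid expands to the pid of the process that opened the image, you can also tie a socket back to its qemu process, e.g. (the pid here is just an example):

    ls /var/run/ceph/              # e.g. rbd-29050.asok
    ps -p 29050 -o pid,user,args   # confirm 29050 is the qemu process for the guest you expect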

2) Watch for permission issues creating the admin socket at the path you used above. In my case, I needed to explicitly grant some permissions in /etc/apparmor.d/abstractions/libvirt-qemu; specifically, I had to add:

   # for rbd
   capability mknod,

and

   # for rbd
   /etc/ceph/ceph.conf r,
   /var/log/ceph/* rw,
   /{,var/}run/ceph/** rw,

3) Be aware that if you have multiple rbd volumes attached to a single guest, you'll only get an admin socket for the volume mounted last.
If you can set admin_socket via the libvirt xml for each volume, you can avoid this issue. This thread explains it better:

http://www.mail-archive.com/ceph-devel@xxxxxxxxxxxxxxx/msg16168.html
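For example, following the same option-passing pattern as the <source> line quoted further down in this thread, a per-volume override along these lines might work (untested here; the socket path is just an example):

    <source protocol='rbd'
            name='rbd/node12_2:rbd_cache=true:admin_socket=/var/run/ceph/rbd-node12_2.asok'/>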

4) Once you get an RBD admin socket, query it like:

ceph --admin-daemon /var/run/ceph/rbd-29050.asok config show | grep rbd


Cheers,
Mike Dawson


On 11/25/2013 11:12 AM, Gregory Farnum wrote:
> On Mon, Nov 25, 2013 at 5:58 AM, Mark Nelson <mark.nelson@xxxxxxxxxxx> wrote:
>> On 11/25/2013 07:21 AM, Shu, Xinxin wrote:
>>>
>>> Recently, I wanted to enable the rbd cache to measure its performance
>>> benefit. I added the rbd_cache=true option to my ceph configuration file
>>> and used 'virsh attach-device' to attach an rbd volume to the vm; below
>>> is my vdb xml file.
>>
>>
>> Ceph configuration files are a bit confusing because sometimes you'll 
>> see something like "rbd_cache" listed somewhere but in the ceph.conf 
>> file you'll want a space instead:
>>
>> rbd cache = true
>>
>> with no underscore.  That should (hopefully) fix it for you!
>
> I believe the config file will take either format.
>
> The RBD cache is a client-side thing, though, so it's not ever going 
> to show up in the OSD! You want to look at the admin socket created by 
> QEMU (via librbd) to see if it's working. :) -Greg
>
>>
>>>
>>> <disk type='network' device='disk'>
>>>
>>>         <driver name='qemu' type='raw' cache='writeback'/>
>>>
>>>         <source protocol='rbd'
>>>
>>> name='rbd/node12_2:rbd_cache=true:rbd_cache_writethrough_until_flush
>>> =true'/>
>>>
>>>         <target dev='vdb' bus='virtio'/>
>>>
>>>         <serial>6b5ff6f4-9f8c-4fe0-84d6-9d795967c7dd</serial>
>>>
>>>         <address type='pci' domain='0x0000' bus='0x00' slot='0x06'
>>> function='0x0'/>
>>>
>>> </disk>
>>>
>>> I do not know whether this is enough to enable the rbd cache. I see perf
>>> counters for the rbd cache in the source code, but when I used the admin
>>> daemon to check the rbd cache statistics,
>>>
>>> ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump
>>>
>>> But I did not get any rbd cache flags.
>>>
>>> My question is how to enable the rbd cache and check the rbd cache perf
>>> counters, or how I can make sure the rbd cache is enabled. Any tips will
>>> be appreciated. Thanks in advance.
>>>
>>>
>>>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



