Re: how to enable rbd cache

Hi Mike, I enabled the RBD admin socket according to your suggestions and added the admin socket option to my ceph.conf, but there is no .asok file in the /var/run/ceph directory. I use nova to boot the instances. Below are the steps I took to enable the RBD admin socket; if something is wrong, please let me know:

1: add the RBD admin socket to /etc/ceph/ceph.conf; here is my ceph.conf on the client hosts:

[global]
    log file = /var/log/ceph/$name.log
    max open files = 131072
    auth cluster required = none
    auth service required = none
    auth client required = none
    rbd cache = true
    debug perfcounter = 20
[client.volumes]
    admin socket = /var/run/ceph/rbd-$pid.asok
[mon.a]
    host = {monitor_host_name}
    mon addr = {monitor_host_addr}

2: add the following to /etc/apparmor.d/abstractions/libvirt-qemu:

    # for rbd
    capability mknod,

    # for rbd
    /etc/ceph/ceph.conf r,
    /var/log/ceph/* rw,
    /var/run/ceph/** rw,

Then restart the libvirt-bin and nova-compute services.

3: recreate the nova instances and attach an rbd volume, then run 'dd if=/dev/zero of=/dev/vdb bs=64k'; after that, check for the /var/run/ceph/rbd-$pid.asok socket, but it does not exist.
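
A rough way to check whether the socket was created, and which qemu process should own it, would be something like:

    ls -l /var/run/ceph/              # look for an rbd-<pid>.asok file
    ps aux | grep [q]emu              # confirm the qemu process and its pid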

My Ceph version is Cuttlefish and OpenStack is Folsom. Does anything look wrong to you? Please let me know.
  
-----Original Message-----
From: Mike Dawson [mailto:mike.dawson@xxxxxxxxxxxx] 
Sent: Tuesday, November 26, 2013 12:41 AM
To: Shu, Xinxin
Cc: Gregory Farnum; Mark Nelson; ceph-users@xxxxxxxxxxxxxx
Subject: Re:  how to enable rbd cache

Greg is right, you need to enable RBD admin sockets. This can be a bit tricky though, so here are a few tips:

1) In ceph.conf on the compute node, explicitly set a location for the admin socket:

[client.volumes]
     admin socket = /var/run/ceph/rbd-$pid.asok

In this example, libvirt/qemu is running with permissions from ceph.client.volumes.keyring. If you use something different, adjust accordingly. You can put this under a more generic [client] section, but there are some downsides (like a new admin socket for each ceph cli command).
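
For reference, with cephx enabled the client name usually comes from the auth element of the libvirt disk definition; a minimal sketch (the uuid and username here are placeholders) would look something like:

     <auth username='volumes'>
         <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
     </auth>

The secret uuid points at a libvirt secret holding the ceph.client.volumes key.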

2) Watch for permissions issues creating the admin socket at the path you used above. In my case, I needed to explicitly grant some permissions in /etc/apparmor.d/abstractions/libvirt-qemu; specifically, I had to add:

   # for rbd
   capability mknod,

and

   # for rbd
   /etc/ceph/ceph.conf r,
   /var/log/ceph/* rw,
   /{,var/}run/ceph/** rw,
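
If the socket still doesn't show up, apparmor denials are usually visible in the kernel log; a quick check (log location may vary by distro) is something like:

    dmesg | grep -i 'apparmor.*denied'
    grep -i 'denied' /var/log/syslog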

3) Be aware that if a single guest has multiple rbd volumes attached, you'll only get an admin socket for the volume mounted last.
If you can set admin_socket via the libvirt xml for each volume, you can avoid this issue (a rough sketch follows the link). This thread will explain better:

http://www.mail-archive.com/ceph-devel@xxxxxxxxxxxxxxx/msg16168.html
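
As a sketch only (not tested here), the per-volume setting could reuse the same key=value syntax that the quoted XML below already uses for rbd_cache, passing admin_socket in the source name; the volume and socket path names are placeholders:

     <source protocol='rbd'
             name='rbd/volume1:rbd_cache=true:admin_socket=/var/run/ceph/rbd-volume1.asok'/>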

4) Once you get an RBD admin socket, query it like:

ceph --admin-daemon /var/run/ceph/rbd-29050.asok config show | grep rbd
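
To confirm the cache setting took effect and to look at the client-side counters, the same socket can be queried further; whether cache-specific counters show up in perf dump depends on the version:

    ceph --admin-daemon /var/run/ceph/rbd-29050.asok config show | grep rbd_cache
    ceph --admin-daemon /var/run/ceph/rbd-29050.asok perf dump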


Cheers,
Mike Dawson


On 11/25/2013 11:12 AM, Gregory Farnum wrote:
> On Mon, Nov 25, 2013 at 5:58 AM, Mark Nelson <mark.nelson@xxxxxxxxxxx> wrote:
>> On 11/25/2013 07:21 AM, Shu, Xinxin wrote:
>>>
>>> Recently , I want to enable rbd cache to identify performance 
>>> benefit. I add rbd_cache=true option in my ceph configure file, I 
>>> use 'virsh attach-device' to attach rbd to vm, below is my vdb xml file.
>>
>>
>> Ceph configuration files are a bit confusing because sometimes you'll 
>> see something like "rbd_cache" listed somewhere but in the ceph.conf 
>> file you'll want a space instead:
>>
>> rbd cache = true
>>
>> with no underscore.  That should (hopefully) fix it for you!
>
> I believe the config file will take either format.
>
> The RBD cache is a client-side thing, though, so it's not ever going 
> to show up in the OSD! You want to look at the admin socket created by 
> QEMU (via librbd) to see if it's working. :) -Greg
>
>>
>>>
>>> <disk type='network' device='disk'>
>>>     <driver name='qemu' type='raw' cache='writeback'/>
>>>     <source protocol='rbd'
>>>             name='rbd/node12_2:rbd_cache=true:rbd_cache_writethrough_until_flush=true'/>
>>>     <target dev='vdb' bus='virtio'/>
>>>     <serial>6b5ff6f4-9f8c-4fe0-84d6-9d795967c7dd</serial>
>>>     <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
>>> </disk>
>>>
>>> I do not know if this is the right way to enable the rbd cache. I see perf
>>> counters for the rbd cache in the source code, but when I used the admin
>>> daemon to check the rbd cache statistics,
>>>
>>> ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump
>>>
>>> I did not get any rbd cache counters.
>>>
>>> My question is how to enable the rbd cache and check the rbd cache perf
>>> counters, or how can I make sure the rbd cache is enabled? Any tips will
>>> be appreciated. Thanks in advance.
>>>
>>>
>>>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



