Re: rbd in centos6.4

Thanks Raj,

 

Which of these rpm versions have you used on production machines?

 

Thanks again in advance.

 

Regards,

-ben

 

From: raj kumar [mailto:rajkumar600003@xxxxxxxxx]
Sent: Wednesday, September 18, 2013 6:09 AM
To: Aquino, BenX O
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: rbd in centos6.4

 

 

On Wed, Sep 18, 2013 at 4:32 AM, Aquino, BenX O <benx.o.aquino@xxxxxxxxx> wrote:

Hello Ceph Users Group,

Looking for rbd.ko for CentOS 6.3 x64 (2.6.32) or CentOS 6.4 x64 (2.6.38).

Or point me to a buildable source or an RPM kernel package that has it.

 

Regards,

Ben

 

From: ceph-users-bounces@xxxxxxxxxxxxxx [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of raj kumar
Sent: Monday, August 26, 2013 11:04 AM
To: Kasper Dieter
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: rbd in centos6.4

 

Thank you so much. This is helpful not only for me, but for all beginners.

 

Raj

 

On Fri, Aug 23, 2013 at 5:31 PM, Kasper Dieter <dieter.kasper@xxxxxxxxxxxxxx> wrote:

Once the cluster is created on the Ceph server nodes, with MONs and OSDs running on them,
you have to copy the config and auth info to the clients:

#--- on server node, e.g.:
scp /etc/ceph/ceph.conf         client-1:/etc/ceph
scp /etc/ceph/keyring.bin       client-1:/etc/ceph
scp /etc/ceph/ceph.conf         client-2:/etc/ceph
scp /etc/ceph/keyring.bin       client-2:/etc/ceph

#--- on client node(s):
modprobe -v rbd
modprobe -v ceph        # only if you want to run CephFS
rados lspools
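# --order sets the object size as a power of two:
# 16 = 64 KiB, 17 = 128 KiB, 18 = 256 KiB, 22 = 4 MiB (the rbd default)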
rbd create -c /etc/ceph/ceph.conf  --size 1024000 --pool rbd    rbd-64k     --order 16 --keyring /etc/ceph/keyring.bin
rbd create -c /etc/ceph/ceph.conf  --size 1024000 --pool rbd    rbd-128k    --order 17 --keyring /etc/ceph/keyring.bin
rbd create -c /etc/ceph/ceph.conf  --size 1024000 --pool rbd    rbd-256k    --order 18 --keyring /etc/ceph/keyring.bin
rbd create -c /etc/ceph/ceph.conf  --size 1024000 --pool rbd    rbd-4m      --order 22 --keyring /etc/ceph/keyring.bin
rbd map rbd-64k
rbd map rbd-128k
rbd map rbd-256k
rbd map rbd-4m
rbd showmapped

id pool   image       snap device
5  rbd    rbd-64k     -    /dev/rbd5
6  rbd    rbd-128k    -    /dev/rbd6
7  rbd    rbd-256k    -    /dev/rbd7
8  rbd    rbd-4m      -    /dev/rbd8


Now your application can directly access the RADOS block devices /dev/rbdX.
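
For example, to put a local filesystem on one of the mapped devices (a minimal
sketch; the filesystem type and mount point are arbitrary choices):

mkfs.xfs /dev/rbd5
mkdir -p /mnt/rbd-64k
mount /dev/rbd5 /mnt/rbd-64k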

Regards,
-Dieter




On Fri, Aug 23, 2013 at 01:31:05PM +0200, raj kumar wrote:
>    Thank you Sir. I appreciate your help on this.
>    I upgraded the kernel to 3.4.53-8.
>    For the second point, I want to give a client (which is not KVM) block
>    storage. So without iSCSI, how will the client access the Ceph cluster and
>    the allocated block device? And can you please let me know the flow to
>    provision the block storage: is creating an rbd image and mapping it on one
>    of the mon hosts right? The Ceph doc is not very clear on this.
>    Regards
>    Raj
>
>    On Fri, Aug 23, 2013 at 4:03 PM, Kasper Dieter
>    <dieter.kasper@xxxxxxxxxxxxxx> wrote:
>
>      On Thu, Aug 22, 2013 at 03:32:35PM +0200, raj kumar wrote:
>      >    The Ceph cluster is running fine on CentOS 6.4.
>      >    Now I would like to export a block device to a client using rbd.
>      >    My question is:
>      >    1. I tried to modprobe rbd on one of the monitor hosts, but I got the
>      >       error:
>      >       FATAL: Module rbd not found
>      >       I could not find the rbd module. How can I do this?
>
>      # cat /etc/centos-release
>      CentOS release 6.4 (Final)
>
>      # updatedb
>      # locate rbd.ko
>      /lib/modules/3.8.13/kernel/drivers/block/rbd.ko
>
>      # locate virtio_blk.ko
>      /lib/modules/2.6.32-358.14.1.el6.x86_64/kernel/drivers/block/virtio_blk.ko
>      /lib/modules/2.6.32-358.el6.x86_64/kernel/drivers/block/virtio_blk.ko
>      /lib/modules/3.8.13/kernel/drivers/block/virtio_blk.ko
>
>      Well, the standard CentOS 6.4 kernel does not include 'rbd.ko'.
>      For some reason the 'enterprise distros' (RHEL, SLES) disable the Ceph
>      kernel components by default, although CephFS (= ceph.ko) has been in the
>      upstream kernel since 2.6.34, and the block device (= rbd.ko) since 2.6.37.
>
>      We built our own kernel 3.8.13 (a good mixture of recent & mature) and
>      put it into CentOS 6.4.
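>
>      A quick way to check whether a given kernel build enables these client
>      bits (a minimal sketch, assuming the usual /boot/config-<version> files
>      are installed):
>
>      # grep -E 'CONFIG_BLK_DEV_RBD|CONFIG_CEPH_FS' /boot/config-$(uname -r)
>      # modinfo rbd     # fails if no rbd.ko is shipped with the running kernel
>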
>      >    2. Once the rbd is created, do we need to create an iSCSI target on
>      >       one of the monitor hosts and present the LUN to the client? If so,
>      >       what if the monitor host goes down? What is the best practice to
>      >       provide a LUN to clients?
>      >    Thanks
>      This depends on your Client.
>      Using
>        "RADOS - Block-Layer - RBD-Driver - iSCSI-TGT // iSCSI-INI - Client"
>      just adds needless stack overhead.
>      If the client is kvm-qemu you can use
>        "RADOS // librbd - kvm-qemu"
>      or
>        "RADOS // Block-Layer - RBD-Driver - Client"
>
>      The "//" symbolized the border between Server-nodes and client-nodes.
>
>      -Dieter
>
>      >    Raj

 

 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
