Three-way replication on a pool failed

Hello Sebastien,

I am configuring Ceph with a 3-node storage cluster plus one Ceph admin node.

I have a few questions.

I have created a pool named 'storage' with replication size 3 and have set a CRUSH rule on it.

root@node1:/home/oss# ceph osd dump | grep -E 'storage'
pool 9 'storage' replicated size 3 min_size 1 crush_ruleset 3 object_hash rjenkins pg_num 8 pgp_num 8 last_change 160 flags hashpspool stripe_width 0

Note: the command used to set the replication size was: # ceph osd pool set storage size 3
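
For reference, the full sequence I used was roughly the following (reconstructed, so the exact arguments may vary; the pg count and ruleset id are the ones shown in the dump above):

ceph osd pool create storage 8 8             # create the pool with pg_num/pgp_num = 8
ceph osd pool set storage crush_ruleset 3    # assign my custom CRUSH ruleset (id 3)
ceph osd pool set storage size 3             # set the replication size to 3
ceph osd pool get storage size               # verify the size was applied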

Even after setting the replication size to 3, my data is not being replicated across all 3 nodes.

Example:
root@Cephadmin:/home/oss# ceph osd map storage check1
osdmap e122 pool 'storage' (9) object 'check1' -> pg 9.7c9c5619 (9.1) -> up ([0,2,1], p0) acting ([0,2,1], p0)

But if I shut down 2 of the nodes, I am unable to access the data. In that scenario I should still be able to read and write data, since my 3rd node is still up (if my understanding is correct). Please let me know where I am wrong.
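
When I test this, I shut two of the nodes down and then check the cluster roughly like this (commands approximate):

ceph osd tree                        # confirm which OSDs are reported down
ceph -s                              # overall health, degraded/undersized PGs
ceph osd pool get storage min_size   # the pool keeps serving I/O only while at least min_size replicas are up
ceph pg dump_stuck inactive          # list PGs stuck inactive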

Crush Map:

# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1

# devices
device 0 osd.0
device 1 osd.1
device 2 osd.2

# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root
type 11 pool

# buckets
host node2 {
        id -2           # do not change unnecessarily
        # weight 0.030
        alg straw
        hash 0  # rjenkins1
        item osd.0 weight 0.030
}
host node3 {
        id -3           # do not change unnecessarily
        # weight 0.030
        alg straw
        hash 0  # rjenkins1
        item osd.1 weight 0.030
}
host node1 {
        id -4           # do not change unnecessarily
        # weight 0.030
        alg straw
        hash 0  # rjenkins1
        item osd.2 weight 0.030
}
root default {
        id -1           # do not change unnecessarily
        # weight 0.090
        alg straw
        hash 0  # rjenkins1
        item node2 weight 0.030
        item node3 weight 0.030
        item node1 weight 0.030
}
pool storage {
        id -5           # do not change unnecessarily
        # weight 0.090
        alg straw
        hash 0  # rjenkins1
        item node2 weight 0.030
        item node3 weight 0.030
        item node1 weight 0.030
}

# rules
rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
}

rule storage {
    ruleset 3
    type replicated
    min_size 1
    max_size 10
    step take storage
    step choose firstn 0 type osd
    step emit
}
# end crush map
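
For completeness, this is roughly how I edit and test the map (file names are just the ones I happen to use):

ceph osd getcrushmap -o crushmap.bin        # export the compiled CRUSH map
crushtool -d crushmap.bin -o crushmap.txt   # decompile it to the text shown above
crushtool -c crushmap.txt -o crushmap.new   # recompile after editing
crushtool -i crushmap.new --test --rule 3 --num-rep 3 --show-mappings   # show which OSDs rule 3 selects
ceph osd setcrushmap -i crushmap.new        # inject the edited map back into the cluster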


root@node1:/home/oss# ceph osd tree
# id    weight  type name       up/down reweight
-5      0.09    pool storage
-2      0.03            host node2
0       0.03                    osd.0   up      1
-3      0.03            host node3
1       0.03                    osd.1   up      1
-4      0.03            host node1
2       0.03                    osd.2   up      1
-1      0.09    root default
-2      0.03            host node2
0       0.03                    osd.0   up      1
-3      0.03            host node3
1       0.03                    osd.1   up      1
-4      0.03            host node1
2       0.03                    osd.2   up      1

Reference: http://www.sebastien-han.fr/blog/2012/12/07/ceph-2-speed-storage-with-crush/


-----Original Message-----
From: Sebastien Han [mailto:sebastien.han@xxxxxxxxxxxx] 
Sent: Tuesday, September 16, 2014 7:43 PM
To: Channappa Negalur, M.
Cc: ceph-users at lists.ceph.com
Subject: Re: vdb busy error when attaching to instance

Did you follow this ceph.com/docs/master/rbd/rbd-openstack/ to configure your env?

On 12 Sep 2014, at 14:38, m.channappa.negalur at accenture.com wrote:

> Hello Team,
>  
> I have configured ceph as a multibackend for openstack.
>  
> I have created 2 pools .
> 1. volumes (replication size = 3)
> 2. poolb (replication size = 2)
>  
> Below is the details from /etc/cinder/cinder.conf
>  
> enabled_backends=rbd-ceph,rbd-cephrep
> [rbd-ceph]
> volume_driver=cinder.volume.drivers.rbd.RBDDriver
> rbd_pool=volumes
> volume_backend_name=ceph
> rbd_user=volumes
> rbd_secret_uuid=34c88ed2-1cf6-446d-8564-f888934eec35
> volumes_dir=/var/lib/cinder/volumes
> [rbd-cephrep]
> volume_driver=cinder.volume.drivers.rbd.RBDDriver
> rbd_pool=poolb
> volume_backend_name=ceph1
> rbd_user=poolb
> rbd_secret_uuid=d62b0df6-ee26-46f0-8d90-4ef4d55caa5b
> volumes_dir=/var/lib/cinder/volumes1
>  
> When I am attaching a volume to an instance, I am getting a "DeviceIsBusy: The supplied device (vdb) is busy" error.
>  
> Please let me know how to correct this.
>  
> Regards,
> Malleshi CN
> 
> 
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Cheers.
----
Sébastien Han
Cloud Architect 

"Always give 100%. Unless you're giving blood."

Phone: +33 (0)1 49 70 99 72
Mail: sebastien.han at enovance.com
Address: 11 bis, rue Roquépine - 75008 Paris
Web: www.enovance.com - Twitter: @enovance


