Re: ceph not replicating to all osds

On Tue, Jun 28, 2016 at 1:00 AM, Ishmael Tsoaela <ishmaelt3@xxxxxxxxx> wrote:
> Hi ALL,
>
> Anyone can help with this issue would be much appreciated.
>
> I have created an image on one client and mounted it on both of the two
> clients I have set up.
>
> When I write data on one client, I cannot access it from the other client.
> What could be causing this issue?

I suspect you are talking about files showing up in a filesystem on the
RBD image you have mounted on both clients? If so, you need to verify
that the filesystem you chose supports concurrent access from multiple
clients.
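
To make that concrete, here is a rough sketch of the pattern I suspect you
are hitting (the pool, image, and device names below are only examples):

    # first client: map the image, create a local filesystem, mount it
    rbd map data/myimage        # say it shows up as /dev/rbd0
    mkfs.ext4 /dev/rbd0
    mount /dev/rbd0 /mnt

    # second client: map and mount the same image
    rbd map data/myimage
    mount /dev/rbd0 /mnt        # files written on the first client will not
                                # appear here reliably; each kernel caches its
                                # own view of the block device and knows
                                # nothing about the other mount

ext4/XFS are single-host filesystems, so this is expected behaviour. For two
clients to share the same RBD image you would need a cluster-aware filesystem
(e.g. OCFS2 or GFS2) on it, or you could use CephFS instead of RBD for shared
file access.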

Let me know if I got this wrong (please provide a more detailed
description), or if you need more information.

Cheers,
Brad

>
> root@nodeB:/mnt# ceph osd tree
> ID WEIGHT  TYPE NAME      UP/DOWN REWEIGHT PRIMARY-AFFINITY
> -1 1.81738 root default
> -2 0.90869     host nodeB
>  0 0.90869         osd.0       up  1.00000          1.00000
> -3 0.90869     host nodeC
>  1 0.90869         osd.1       up  1.00000          1.00000
>
>
> cluster_master@nodeC:/mnt$ ceph osd dump | grep data
> pool 1 'data' replicated size 2 min_size 1 crush_ruleset 0 object_hash
> rjenkins pg_num 128 pgp_num 128 last_change 17 flags hashpspool stripe_width
> 0
>
>
> cluster_master@nodeC:/mnt$ cat decompiled-crush-map.txt
> # begin crush map
> tunable choose_local_tries 0
> tunable choose_local_fallback_tries 0
> tunable choose_total_tries 50
> tunable chooseleaf_descend_once 1
> tunable chooseleaf_vary_r 1
> tunable straw_calc_version 1
>
> # devices
> device 0 osd.0
> device 1 osd.1
>
> # types
> type 0 osd
> type 1 host
> type 2 chassis
> type 3 rack
> type 4 row
> type 5 pdu
> type 6 pod
> type 7 room
> type 8 datacenter
> type 9 region
> type 10 root
>
> # buckets
> host nodeB {
>         id -2           # do not change unnecessarily
>         # weight 0.909
>         alg straw
>         hash 0  # rjenkins1
>         item osd.0 weight 0.909
> }
> host nodeC {
>         id -3           # do not change unnecessarily
>         # weight 0.909
>         alg straw
>         hash 0  # rjenkins1
>         item osd.1 weight 0.909
> }
> root default {
>         id -1           # do not change unnecessarily
>         # weight 1.817
>         alg straw
>         hash 0  # rjenkins1
>         item nodeB weight 0.909
>         item nodeC weight 0.909
> }
>
> # rules
> rule replicated_ruleset {
>         ruleset 0
>         type replicated
>         min_size 1
>         max_size 10
>         step take default
>         step chooseleaf firstn 0 type host
>         step emit
> }
>
> # end crush map
>
>





