Pools do not respond

The PG in question isn't being properly mapped to any OSDs. There's a
good chance that those trees (with 3 OSDs in 2 hosts) aren't going to
map well anyway, but the immediate problem should resolve itself if
you change the "choose" to "chooseleaf" in your rules.
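With plain "choose firstn 0 type host" followed directly by "emit", CRUSH
returns the host buckets themselves and never descends to an OSD, which is
why the PG ends up with an empty up/acting set. As a rough sketch (reusing
the rule and pool names from your map below, and changing only the "choose"
line; the same edit applies to the 4x4GbFCnlSAS rule), the 4x1GbFCnlSAS rule
would become:

rule 4x1GbFCnlSAS {
        ruleset 2
        type replicated
        min_size 1
        max_size 10
        step take 4x1GbFCnlSAS
        step chooseleaf firstn 0 type host
        step emit
}

The usual way to apply that is the decompile/edit/recompile cycle; the file
names here are just examples and the exact steps may vary a bit by release:

# dump and decompile the current CRUSH map
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# edit crushmap.txt: change "step choose firstn 0 type host"
#                    to     "step chooseleaf firstn 0 type host"
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new
# then re-check the mapping; up/acting should no longer be empty
ceph osd map cloud-4x1GbFCnlSAS 4x1GbFCnlSAS.object
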
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com


On Thu, Jul 3, 2014 at 4:17 AM, Iban Cabrillo <cabrillo at ifca.unican.es> wrote:
> Hi folks,
>   I am following the test installation step by step, and checking some
> configuration before trying to deploy a production cluster.
>
>   Now I have a healthy cluster with 3 mons + 4 OSDs.
>   I have created a pool containing all the osd.x, plus two more: one for
> two of the servers and the other for the other two.
>
>   The general pool works fine (I can create images and mount them on
> remote machines).
>
>   But the other two do not work (the commands rados put and rbd ls "pool"
> hang forever).
>
>   This is the tree:
>
>    [ceph at cephadm ceph-cloud]$ sudo ceph osd tree
> # id   weight  type name            up/down  reweight
> -7     5.4     root 4x1GbFCnlSAS
> -3     2.7         host node04
> 1      2.7             osd.1        up       1
> -4     2.7         host node03
> 2      2.7             osd.2        up       1
> -6     8.1     root 4x4GbFCnlSAS
> -5     5.4         host node01
> 3      2.7             osd.3        up       1
> 4      2.7             osd.4        up       1
> -2     2.7         host node04
> 0      2.7             osd.0        up       1
> -1     13.5    root default
> -2     2.7         host node04
> 0      2.7             osd.0        up       1
> -3     2.7         host node04
> 1      2.7             osd.1        up       1
> -4     2.7         host node03
> 2      2.7             osd.2        up       1
> -5     5.4         host node01
> 3      2.7             osd.3        up       1
> 4      2.7             osd.4        up       1
>
>
> And this is the crushmap:
>
> ...
> root 4x4GbFCnlSAS {
>         id -6 #do not change unnecessarily
>         alg straw
>         hash 0  # rjenkins1
>         item node01 weight 5.400
>         item node04 weight 2.700
> }
> root 4x1GbFCnlSAS {
>         id -7 #do not change unnecessarily
>         alg straw
>         hash 0  # rjenkins1
>         item node04 weight 2.700
>         item node03 weight 2.700
> }
> # rules
> rule 4x4GbFCnlSAS {
>         ruleset 1
>         type replicated
>         min_size 1
>         max_size 10
>         step take 4x4GbFCnlSAS
>         step choose firstn 0 type host
>         step emit
> }
> rule 4x1GbFCnlSAS {
>         ruleset 2
>         type replicated
>         min_size 1
>         max_size 10
>         step take 4x1GbFCnlSAS
>         step choose firstn 0 type host
>         step emit
> }
> ......
> Of course I set the crush_ruleset on each pool:
> sudo ceph osd pool set cloud-4x1GbFCnlSAS crush_ruleset 2
> sudo ceph osd pool set cloud-4x4GbFCnlSAS crush_ruleset 1
>
> But it seems something is wrong (4x4GbFCnlSAS.pool is a 512 MB file):
>    sudo rados -p cloud-4x1GbFCnlSAS put 4x4GbFCnlSAS.object 4x4GbFCnlSAS.pool
> !!HANGS forever!!
>
> From the ceph client the same thing happens:
>  rbd ls cloud-4x1GbFCnlSAS
> !!HANGS forever!!
>
>
> [root at cephadm ceph-cloud]# ceph osd map cloud-4x1GbFCnlSAS 4x1GbFCnlSAS.object
> osdmap e49 pool 'cloud-4x1GbFCnlSAS' (3) object '4x1GbFCnlSAS.object' -> pg 3.114ae7a9 (3.29) -> up ([], p-1) acting ([], p-1)
>
> Any idea what I am doing wrong?
>
> Thanks in advance, I
> Bertrand Russell:
> "El problema con el mundo es que los est?pidos est?n seguros de todo y los
> inteligentes est?n llenos de dudas"
>
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>

