RE: rbd map fail when the crushmap algorithm changed to tree

Hi, Gregory:

OS: Ubuntu 12.04
kernel: 3.2.0-26
ceph: 0.48
filesystem : ext4

My steps to assign a new crush map:
1. ceph osd getcrushmap -o curmap
2. crushtool -d curmap -o curmap.txt
3. modify the curmap.txt and rename to newmap.txt
4. service ceph -a stop  => tear down the cluster
5. mkcephfs -a -c ceph.conf --crushmap newmap
6. service ceph -a start
7. rbd map <image_name>

I do not find any errors in dmesg or /var/log/ceph/osd.log.
It just hangs at step 7.
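For reference, the steps above can be sketched as one sequence (paths and the image name are assumptions based on a standard install; mkcephfs wipes all cluster data, so this is only safe on a test setup). One thing worth double-checking: as far as I know mkcephfs expects a *compiled* binary CRUSH map, so the edited text map needs a `crushtool -c` pass, which the listed steps don't show.

```shell
# 1-2. Export and decompile the current CRUSH map
ceph osd getcrushmap -o curmap
crushtool -d curmap -o curmap.txt

# 3. Edit curmap.txt (e.g. change "alg straw" to "alg tree"), save it
#    as newmap.txt, then compile it back to binary form:
crushtool -c newmap.txt -o newmap

# 4-6. Rebuild the cluster with the new map and restart it
#      (mkcephfs destroys all existing data!)
service ceph -a stop
mkcephfs -a -c /etc/ceph/ceph.conf --crushmap newmap
service ceph -a start

# 7. Map the RBD image
rbd map <image_name>
```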


-----Original Message-----
From: Gregory Farnum [mailto:greg@xxxxxxxxxxx] 
Sent: Saturday, July 07, 2012 12:59 AM
To: Eric YH Chen/WYHQ/Wiwynn
Cc: ceph-devel@xxxxxxxxxxxxxxx; Chris YT Huang/WYHQ/Wiwynn; Victor CY Chang/WYHQ/Wiwynn
Subject: Re: rbd map fail when the crushmap algorithm changed to tree

On Fri, Jul 6, 2012 at 12:27 AM,  <Eric_YH_Chen@xxxxxxxxxx> wrote:
> Hi all:
>
> Here is the original crushmap. I changed the algorithm of the host
> bucket to "tree" and set the map back into the ceph cluster. However,
> when I try to map one image to a rados block device (RBD), it hangs
> with no response until I press ctrl-c.
> ( rbd map xxxx  => then it hangs)
>
> Is there anything wrong with the crushmap? Thanks for the help.

Hmm, your crush map looks okay to me.

What are the versions of everything (cluster, rbd tool, kernel),
what's the exact command you run, and does it output anything? Is
there any information in dmesg?
-Greg
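For anyone following along, the details Greg asks for can be collected quickly (commands assume a standard Linux/ceph install; dmesg may need root):

```shell
# Gather the version and diagnostic info requested above
ceph -v              # ceph cluster/tool version (e.g. 0.48 "argonaut")
uname -r             # running kernel version
dmesg | tail -n 50   # recent kernel log; look for rbd/libceph lines
```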


> =============================================
> # devices
> device 0 osd.0
> device 1 osd.1
> device 2 osd.2
> device 3 osd.3
> device 4 osd.4
> device 5 osd.5
> device 6 osd.6
> device 7 osd.7
> device 8 osd.8
> device 9 osd.9
> device 10 osd.10
> device 11 osd.11
>
> # types
> type 0 osd
> type 1 host
> type 2 rack
> type 3 row
> type 4 room
> type 5 datacenter
> type 6 pool
>
> # buckets
> host store-001 {
>         id -2           # do not change unnecessarily
>         # weight 12.000
>         alg straw
>         hash 0  # rjenkins1
>         item osd.0 weight 1.000
>         item osd.1 weight 1.000
>         item osd.10 weight 1.000
>         item osd.11 weight 1.000
>         item osd.2 weight 1.000
>         item osd.3 weight 1.000
>         item osd.4 weight 1.000
>         item osd.5 weight 1.000
>         item osd.6 weight 1.000
>         item osd.7 weight 1.000
>         item osd.8 weight 1.000
>         item osd.9 weight 1.000
> }
> rack unknownrack {
>         id -3           # do not change unnecessarily
>         # weight 12.000
>         alg straw
>         hash 0  # rjenkins1
>         item store-001 weight 12.000
> }
> pool default {
>         id -1           # do not change unnecessarily
>         # weight 12.000
>         alg straw
>         hash 0  # rjenkins1
>         item unknownrack weight 12.000
> }
>
> # rules
> rule data {
>         ruleset 0
>         type replicated
>         min_size 1
>         max_size 10
>         step take default
>         step choose firstn 0 type osd
>         step emit
> }
> rule metadata {
>         ruleset 1
>         type replicated
>         min_size 1
>         max_size 10
>         step take default
>         step choose firstn 0 type osd
>         step emit
> }
> rule rbd {
>         ruleset 2
>         type replicated
>         min_size 1
>         max_size 10
>         step take default
>         step choose firstn 0 type osd
>         step emit
> }
>
>
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@xxxxxxxxxxxxxxx
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
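A modified map like the one quoted above can also be exercised offline with crushtool's test mode before loading it into a cluster (flag names as in current crushtool documentation; exact options on a release as old as 0.48 may differ):

```shell
# Compile the edited map, then simulate placements through the rbd
# rule (ruleset 2 above) with 2 replicas and summarize the results
crushtool -c newmap.txt -o newmap
crushtool -i newmap --test --rule 2 --num-rep 2 --show-statistics
```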

