Re: creating+incomplete issues

I have changed this line,

step chooseleaf firstn 0 type osd

changing the type from "host" to "osd".

Now the health looks fine:

$ ceph health
HEALTH_OK

Thanks for all the help.
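
For anyone hitting the same thing: the edit itself is the usual
decompile/edit/recompile cycle from the doc Robert linked below
(a sketch; the file names are just examples):

$ ceph osd getcrushmap -o crushmap.bin        # dump the compiled CRUSH map
$ crushtool -d crushmap.bin -o crushmap.txt   # decompile it to text
  (edit crushmap.txt: in rule replicated_ruleset, change
   "step chooseleaf firstn 0 type host" to
   "step chooseleaf firstn 0 type osd")
$ crushtool -c crushmap.txt -o crushmap.new   # recompile
$ ceph osd setcrushmap -i crushmap.new        # inject the new map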


On 2015/10/29 (Thursday) 10:35, Wah Peng wrote:
Hello,

this shows the content of crush-map file, what content should I change
for selecting osd instead of host? thanks in advance.

# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable straw_calc_version 1

# devices
device 0 osd.0
device 1 osd.1
device 2 osd.2

# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root

# buckets
host ceph2 {
         id -2           # do not change unnecessarily
         # weight 0.240
         alg straw
         hash 0  # rjenkins1
         item osd.0 weight 0.080
         item osd.1 weight 0.080
         item osd.2 weight 0.080
}
root default {
         id -1           # do not change unnecessarily
         # weight 0.240
         alg straw
         hash 0  # rjenkins1
         item ceph2 weight 0.240
}

# rules
rule replicated_ruleset {
         ruleset 0
         type replicated
         min_size 1
         max_size 10
         step take default
         step chooseleaf firstn 0 type host
         step emit
}

# end crush map
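
With the change described at the top of the thread applied, that rule
section reads:

rule replicated_ruleset {
         ruleset 0
         type replicated
         min_size 1
         max_size 10
         step take default
         step chooseleaf firstn 0 type osd
         step emit
}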


On 2015/10/29 (Thursday) 10:03, Robert LeBlanc wrote:
A Google search should have led you the rest of the way.

Follow this [1] and, in the rule section, change host to osd on the
"step chooseleaf" line. You won't need to change the configuration file
this way; the change is saved in the CRUSH map.

[1]
http://docs.ceph.com/docs/master/rados/operations/crush-map/#editing-a-crush-map


Robert LeBlanc

Sent from a mobile device; please excuse any typos.

On Oct 28, 2015 7:46 PM, "Wah Peng" <wah_peng@xxxxxxxxxxxx> wrote:

    Is there an existing ceph sub-command for this, instead of changing the config file? :)


    On 2015/10/29 (Thursday) 9:24, Li, Chengyuan wrote:

        Try " osd crush chooseleaf type = 0" in
/etc/ceph/<clustername>.conf
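
        For the record, that setting would go in the [global] section of
        ceph.conf, and (as far as I understand) it is only picked up when
        the initial CRUSH map/rule is generated, i.e. before the cluster
        is deployed - on a running cluster the CRUSH rule itself has to
        be edited. A sketch:

        [global]
        # replicate across osds (type 0) instead of hosts in the default rule
        osd crush chooseleaf type = 0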


        Regards,
        CY.

        -----Original Message-----
        From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Wah Peng
        Sent: October 29, 2015 9:14
        To: Robert LeBlanc
        Cc: Lindsay Mathieson; Gurjar, Unmesh; ceph-users@xxxxxxxxxxxxxx
        Subject: Re: creating+incomplete issues

        Wow, this sounds hard to me. Can you show the details?
        Thanks a lot.


        On 2015/10/29 (Thursday) 9:01, Robert LeBlanc wrote:

            You need to change the CRUSH map to select osd instead of host.

            Robert LeBlanc

            Sent from a mobile device; please excuse any typos.

            On Oct 28, 2015 7:00 PM, "Wah Peng" <wah_peng@xxxxxxxxxxxx> wrote:

                  $ ceph osd tree
                  # id    weight  type name       up/down reweight
                  -1      0.24    root default
                  -2      0.24            host ceph2
                  0       0.07999                 osd.0   up      1
                  1       0.07999                 osd.1   up      1
                  2       0.07999                 osd.2   up      1


                  On 2015/10/29 (Thursday) 8:55, Robert LeBlanc wrote:

                      Please paste 'ceph osd tree'.

                      Robert LeBlanc

                      Sent from a mobile device; please excuse any typos.

                      On Oct 28, 2015 6:54 PM, "Wah Peng" <wah_peng@xxxxxxxxxxxx> wrote:

                           Hello,

                           Just did it, but still no good health. Can you help? Thanks.

                           ceph@ceph:~/my-cluster$ ceph osd stat
                                 osdmap e24: 3 osds: 3 up, 3 in

                           ceph@ceph:~/my-cluster$ ceph health
                           HEALTH_WARN 89 pgs degraded; 67 pgs incomplete; 67 pgs stuck inactive; 192 pgs stuck unclean


                           On 2015/10/29 (Thursday) 8:38, Lindsay Mathieson wrote:


                                On 29 October 2015 at 10:29, Wah Peng <wah_peng@xxxxxxxxxxxx> wrote:

                                    $ ceph osd stat
                                          osdmap e18: 2 osds: 2 up, 2 in

                                    This is what it shows. Does it mean I need to add up to 3
                                    osds? I just use the default setup.


                               If you went with the defaults then your pool size will be 3,
                               meaning it needs 3 copies of the data (replica 3) to be valid -
                               as you only have two nodes/osds, that can never happen :)

                               Your options are:
                               - Add another node and osd,
                               or
                               - reduce the size to 2 (ceph osd pool set <poolname> size 2);
                                 see the sketch just below.
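
                               Spelled out, assuming the default pool is named
                               "rbd" (check with "ceph osd lspools" and substitute
                               your own pool name):

                               $ ceph osd pool get rbd size    # current replica count
                               $ ceph osd pool set rbd size 2  # keep 2 copies instead of 3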



                               --
                               Lindsay




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



