Re: creating+incomplete issues

A Google search should have led you the rest of the way.

Follow this [1], and in the rule section, on the "step chooseleaf" line, change "host" to "osd". You won't need to change the configuration file this way; the change is stored in the CRUSH map itself.

[1] http://docs.ceph.com/docs/master/rados/operations/crush-map/#editing-a-crush-map
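
For reference, the workflow from [1] looks roughly like this (the file names below are arbitrary placeholders):

    ceph osd getcrushmap -o crushmap.bin        # dump the compiled CRUSH map
    crushtool -d crushmap.bin -o crushmap.txt   # decompile it to editable text
    # in crushmap.txt, in the rule section, change
    #   step chooseleaf firstn 0 type host
    # to
    #   step chooseleaf firstn 0 type osd
    crushtool -c crushmap.txt -o crushmap.new   # recompile
    ceph osd setcrushmap -i crushmap.new        # inject the edited map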

Robert LeBlanc

Sent from a mobile device, please excuse any typos.

On Oct 28, 2015 7:46 PM, "Wah Peng" <wah_peng@xxxxxxxxxxxx> wrote:
Is there an existing ceph sub-command for this, instead of changing the config file? :)


On 2015/10/29 (Thursday) 9:24, Li, Chengyuan wrote:
Try "osd crush chooseleaf type = 0" in /etc/ceph/<clustername>.conf
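
A minimal sketch of where that would go (note: as far as I know this setting only affects the CRUSH map generated when the cluster is first created, so on an already-running cluster you would still edit the CRUSH map directly, as Robert described):

    [global]
        # place replicas across OSDs instead of across hosts
        osd crush chooseleaf type = 0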


Regards,
CY.

-----Original Message-----
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Wah Peng
Sent: October 29, 2015 9:14
To: Robert LeBlanc
Cc: Lindsay Mathieson; Gurjar, Unmesh; ceph-users@xxxxxxxxxxxxxx
Subject: Re: creating+incomplete issues

Wow, this sounds hard to me. Can you show the details?
Thanks a lot.


On 2015/10/29 (Thursday) 9:01, Robert LeBlanc wrote:
You need to change the CRUSH map to select osd instead of host.
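
To check what the current rule is doing before editing it (standard ceph CLI):

    ceph osd crush rule dump    # look for the chooseleaf step and whether its type is host or osd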

Robert LeBlanc

Sent from a mobile device, please excuse any typos.

On Oct 28, 2015 7:00 PM, "Wah Peng" <wah_peng@xxxxxxxxxxxx> wrote:

     $ ceph osd tree
     # id    weight  type name       up/down reweight
     -1      0.24    root default
     -2      0.24            host ceph2
     0       0.07999                 osd.0   up      1
     1       0.07999                 osd.1   up      1
     2       0.07999                 osd.2   up      1


     On 2015/10/29 (Thursday) 8:55, Robert LeBlanc wrote:

         Please paste 'ceph osd tree'.

         Robert LeBlanc

          Sent from a mobile device, please excuse any typos.

          On Oct 28, 2015 6:54 PM, "Wah Peng" <wah_peng@xxxxxxxxxxxx> wrote:

              Hello,

               Just did it, but the health is still not good. Can you help? Thanks.

              ceph@ceph:~/my-cluster$ ceph osd stat
                    osdmap e24: 3 osds: 3 up, 3 in

              ceph@ceph:~/my-cluster$ ceph health
              HEALTH_WARN 89 pgs degraded; 67 pgs incomplete; 67 pgs stuck
              inactive; 192 pgs stuck unclean


               On 2015/10/29 (Thursday) 8:38, Lindsay Mathieson wrote:


                   On 29 October 2015 at 10:29, Wah Peng <wah_peng@xxxxxxxxxxxx> wrote:

                       $ ceph osd stat
                             osdmap e18: 2 osds: 2 up, 2 in

                        This is what it shows.
                        Does it mean I need to add up to 3 osds? I just use the
                        default setup.


                   If you went with the defaults then your pool size will be 3,
                   meaning it needs 3 copies of the data (replica 3) to be
                   valid - as you only have two nodes/OSDs, that can never
                   happen :)

                   Your options are (see the sketch below):
                   - Add another node and OSD, or
                   - Reduce the pool size to 2 (ceph osd pool set <poolname> size 2)
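
                   For example, assuming the default pool is named "rbd" (just a
                   guess - check your actual pool names with "ceph osd lspools"):

                       ceph osd pool set rbd size 2      # keep 2 replicas
                       ceph osd pool set rbd min_size 1  # allow I/O with a single replica up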



                  --
                  Lindsay

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
