PG degraded after setting OSDs out

Hello,

 

I have a test cluster of 12 OSDs. I deleted all pools and then set six of the OSDs out. After that I created a pool with 100 PGs, and the PGs are now stuck in the creating or degraded state. Can you please advise? Does the CRUSH algorithm still take the OSDs marked as down into consideration? Even if I had data, shouldn't the cluster rebalance by itself? I am using Ceph 0.94.2.
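For reference, this is roughly the sequence I ran, from memory (<old-pool> stands for the pools I deleted; osd.6 through osd.11 are the ones shown as down/out in the tree below, and I may have marked them out one at a time rather than in a single command):

ceph@Admin:~$ ceph osd pool delete <old-pool> <old-pool> --yes-i-really-really-mean-it
ceph@Admin:~$ ceph osd out 6 7 8 9 10 11
ceph@Admin:~$ ceph osd pool create bench 100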

 

ceph@Admin:~$ ceph osd tree
ID WEIGHT  TYPE NAME      UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.11993 root default
-2 0.03998     host OSD0
 0 0.00999         osd.0       up  1.00000          1.00000
 3 0.00999         osd.3       up  1.00000          1.00000
 6 0.00999         osd.6     down        0          1.00000
 7 0.00999         osd.7     down        0          1.00000
-3 0.03998     host OSD1
 1 0.00999         osd.1       up  1.00000          1.00000
 4 0.00999         osd.4       up  1.00000          1.00000
 8 0.00999         osd.8     down        0          1.00000
 9 0.00999         osd.9     down        0          1.00000
-4 0.03998     host OSD2
 2 0.00999         osd.2       up  1.00000          1.00000
 5 0.00999         osd.5       up  1.00000          1.00000
10 0.00999         osd.10    down        0          1.00000
11 0.00999         osd.11    down        0          1.00000

 

ceph@Admin:~$ ceph osd pool create bench 100
ceph@Admin:~$ ceph -w
    cluster 61167227-c16d-415d-b45d-5f0af1400e14
     health HEALTH_WARN
            54 pgs degraded
            54 pgs stuck degraded
            38 pgs stuck inactive
            67 pgs stuck unclean
            54 pgs stuck undersized
            54 pgs undersized
     monmap e1: 1 mons at {MON=10.0.0.208:6789/0}
            election epoch 1, quorum 0 MON
     osdmap e697: 12 osds: 6 up, 6 in
      pgmap v35990: 100 pgs, 1 pools, 0 bytes data, 0 objects
            312 MB used, 91781 MB / 92093 MB avail
                  33 active+clean
                  29 active+undersized+degraded
                  25 undersized+degraded+peered
                  13 creating

2015-07-07 17:08:40.326196 mon.0 [INF] pgmap v35990: 100 pgs: 29 active+undersized+degraded, 13 creating, 33 active+clean, 25 undersized+degraded+peered; 0 bytes data, 312 MB used, 91781 MB / 92093 MB avail
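In case more detail would help with the question about CRUSH and the down/out OSDs, these are the commands I can run and post the output of; they should show the new pool's replica count and min_size, the CRUSH rule it maps through, and exactly which PGs are stuck:

ceph@Admin:~$ ceph osd pool get bench size       # replica count of the new pool
ceph@Admin:~$ ceph osd pool get bench min_size
ceph@Admin:~$ ceph osd crush rule dump           # rule and bucket type the pool maps over
ceph@Admin:~$ ceph health detail                 # per-PG detail behind the HEALTH_WARN above
ceph@Admin:~$ ceph pg dump_stuck unclean         # the PGs counted as stuck unclean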

 

Regards,

Ali


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
