Re: Strange osd in PG with new EC-Pool - pgs: 2 active+undersized+degraded

Assuming you've calculated the number of PGs reasonably, see here and here.  I’m guessing these will address your issue.  That weird number means that no OSD was found/assigned to the PG.
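For what it's worth, 2147483647 is 2^31 - 1, the placeholder Ceph prints when CRUSH leaves a shard slot of an erasure-coded PG unfilled; in other words, CRUSH gave up before it found a seventh distinct host for k=5, m=2. A common remedy from the troubleshooting docs is to give CRUSH more attempts by adding "step set_choose_tries 100" to the rule the EC pool uses. A sketch only (file names are placeholders, and it assumes the pool uses its auto-generated erasure rule):

# Pull and decompile the current CRUSH map (output file names are arbitrary)
ceph osd getcrushmap -o crush.bin
crushtool -d crush.bin -o crush.txt

# In crush.txt, find the rule used by the EC pool and add, before its "step take" line:
#     step set_choose_tries 100
# then recompile and inject the map again
crushtool -c crush.txt -o crush.new
ceph osd setcrushmap -i crush.new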

 

-don-


-----Original Message-----
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Udo Lembke
Sent: 25 March, 2015 01:21
To: ceph-users@xxxxxxxx
Subject: Strange osd in PG with new EC-Pool - pgs: 2 active+undersized+degraded

 

Hi,

After adding two more hosts (now 7 storage nodes) I want to create a new EC pool, and I get a strange effect:

 

ceph@admin:~$ ceph health detail
HEALTH_WARN 2 pgs degraded; 2 pgs stuck degraded; 2 pgs stuck unclean; 2 pgs stuck undersized; 2 pgs undersized
pg 22.3e5 is stuck unclean since forever, current state active+undersized+degraded, last acting [76,15,82,11,57,29,2147483647]
pg 22.240 is stuck unclean since forever, current state active+undersized+degraded, last acting [38,85,17,74,2147483647,10,58]
pg 22.3e5 is stuck undersized for 406.614447, current state active+undersized+degraded, last acting [76,15,82,11,57,29,2147483647]
pg 22.240 is stuck undersized for 406.616563, current state active+undersized+degraded, last acting [38,85,17,74,2147483647,10,58]
pg 22.3e5 is stuck degraded for 406.614566, current state active+undersized+degraded, last acting [76,15,82,11,57,29,2147483647]
pg 22.240 is stuck degraded for 406.616679, current state active+undersized+degraded, last acting [38,85,17,74,2147483647,10,58]
pg 22.3e5 is active+undersized+degraded, acting [76,15,82,11,57,29,2147483647]
pg 22.240 is active+undersized+degraded, acting [38,85,17,74,2147483647,10,58]

 

But I have only 91 OSDs (84 SATA + 7 SSDs), not 2147483647!
Where the heck does the 2147483647 come from?

 

I ran the following commands:

ceph osd erasure-code-profile set 7hostprofile k=5 m=2 ruleset-failure-domain=host
ceph osd pool create ec7archiv 1024 1024 erasure 7hostprofile
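The resulting profile and rule can be double-checked like this (assuming the default behaviour of naming the auto-created rule after the pool; adjust the names if your setup differs):

ceph osd erasure-code-profile get 7hostprofile   # should list k=5 m=2 ruleset-failure-domain=host
ceph osd crush rule dump ec7archiv               # the rule auto-created for the EC pool
ceph pg 22.3e5 query                             # up/acting sets show which shard slot is unfilled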

 

my version:

ceph -v

ceph version 0.87.1 (283c2e7cfa2457799f534744d7d549f83ea1335e)


I found an issue in my crushmap - one SSD appeared twice in the map:

host ceph-061-ssd {
        id -16          # do not change unnecessarily
        # weight 0.000
        alg straw
        hash 0  # rjenkins1
}
root ssd {
        id -13          # do not change unnecessarily
        # weight 0.780
        alg straw
        hash 0  # rjenkins1
        item ceph-01-ssd weight 0.170
        item ceph-02-ssd weight 0.170
        item ceph-03-ssd weight 0.000
        item ceph-04-ssd weight 0.170
        item ceph-05-ssd weight 0.170
        item ceph-06-ssd weight 0.050
        item ceph-07-ssd weight 0.050
        item ceph-061-ssd weight 0.000
}

 

Host ceph-061-ssd doesn't exist, and osd-61 is the SSD from ceph-03-ssd, but after fixing the crushmap the issue with osd 2147483647 still exists.
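One way to check whether the corrected map can satisfy the rule at all, before injecting it again, is to test it offline with crushtool (a sketch; rule id 1 and the file name are placeholders - take the real id from the decompiled map or from ceph osd crush rule dump):

ceph osd getcrushmap -o crush.bin
crushtool -i crush.bin --test --rule 1 --num-rep 7 --show-bad-mappings
# every "bad mapping" line is an input for which CRUSH could not fill all 7 slots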

 

Any idea how to fix that?

 

regards

 

Udo

 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
