Sorry, all: my company’s e-mail security got in the way there. Try these references:
http://tracker.ceph.com/issues/10350
http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg/#crush-gives-up-too-soon

-don-
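For reference, the fix described at the second link boils down to letting CRUSH retry more often before it gives up on finding a distinct host for every chunk. A minimal sketch of that workaround, assuming you edit the rule that was auto-created for the EC pool; the value 100 is only the example value from the troubleshooting page, not something tuned for this cluster:

  # grab the current CRUSH map and decompile it to text
  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt

  # in crushmap.txt, inside the rule the EC pool uses, add
  #   step set_choose_tries 100
  # right before the first "step take ..." line, then recompile
  # and inject the edited map
  crushtool -c crushmap.txt -o crushmap.new
  ceph osd setcrushmap -i crushmap.new

With k=5 and m=2 on exactly seven hosts every PG needs all seven hosts, so there is no slack at all and the default number of tries can fail for a few PGs; those PGs then show the "no OSD" placeholder (2147483647) in their acting set, as in the report below.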
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Don Doerner

Assuming you've calculated the number of PGs reasonably, see here and here (i.e. the two links above). I'm guessing these will address your issue.
That weird number means that no OSD was found/assigned to the PG.

-don-

-----Original Message-----

Hi,
after adding two more hosts (now 7 storage nodes) I wanted to create a new EC pool and got a strange effect:

ceph@admin:~$ ceph health detail
HEALTH_WARN 2 pgs degraded; 2 pgs stuck degraded; 2 pgs stuck unclean; 2 pgs stuck undersized; 2 pgs undersized
pg 22.3e5 is stuck unclean since forever, current state active+undersized+degraded, last acting [76,15,82,11,57,29,2147483647]
pg 22.240 is stuck unclean since forever, current state active+undersized+degraded, last acting [38,85,17,74,2147483647,10,58]
pg 22.3e5 is stuck undersized for 406.614447, current state active+undersized+degraded, last acting [76,15,82,11,57,29,2147483647]
pg 22.240 is stuck undersized for 406.616563, current state active+undersized+degraded, last acting [38,85,17,74,2147483647,10,58]
pg 22.3e5 is stuck degraded for 406.614566, current state active+undersized+degraded, last acting [76,15,82,11,57,29,2147483647]
pg 22.240 is stuck degraded for 406.616679, current state active+undersized+degraded, last acting [38,85,17,74,2147483647,10,58]
pg 22.3e5 is active+undersized+degraded, acting [76,15,82,11,57,29,2147483647]
pg 22.240 is active+undersized+degraded, acting [38,85,17,74,2147483647,10,58]

But I have only 91 OSDs (84 SATA + 7 SSDs), not 2147483647! Where the heck did the 2147483647 come from?

I ran the following commands:

ceph osd erasure-code-profile set 7hostprofile k=5 m=2 ruleset-failure-domain=host
ceph osd pool create ec7archiv 1024 1024 erasure 7hostprofile

My version:

ceph -v
ceph version 0.87.1 (283c2e7cfa2457799f534744d7d549f83ea1335e)

I found an issue in my crush map - one SSD was in the map twice:

host ceph-061-ssd {
        id -16          # do not change unnecessarily
        # weight 0.000
        alg straw
        hash 0  # rjenkins1
}
root ssd {
        id -13          # do not change unnecessarily
        # weight 0.780
        alg straw
        hash 0  # rjenkins1
        item ceph-01-ssd weight 0.170
        item ceph-02-ssd weight 0.170
        item ceph-03-ssd weight 0.000
        item ceph-04-ssd weight 0.170
        item ceph-05-ssd weight 0.170
        item ceph-06-ssd weight 0.050
        item ceph-07-ssd weight 0.050
        item ceph-061-ssd weight 0.000
}

The host ceph-061-ssd doesn't exist, and osd-61 is the SSD from ceph-03-ssd, but after fixing the crush map the issue with the osd 2147483647 still exists.

Any idea how to fix that?

regards

Udo
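One way to check whether the corrected map can actually satisfy the EC rule, before injecting anything, is to replay placements through crushtool offline. A sketch under the assumption that the rule for ec7archiv got ruleset id 1 and that the pool has 1024 PGs; check the real id with the rule dump first, since both values are assumptions rather than something taken from this thread:

  # which ruleset does the pool use?
  ceph osd pool get ec7archiv crush_ruleset
  ceph osd crush rule dump

  # replay 1024 placements through that rule and report any that
  # come back with fewer than 7 OSDs
  ceph osd getcrushmap -o crushmap.bin
  crushtool -i crushmap.bin --test --rule 1 --num-rep 7 \
      --show-bad-mappings --min-x 0 --max-x 1023

If bad mappings still show up with the duplicate ceph-061-ssd bucket gone, raising set_choose_tries as in the sketch further up (and as discussed in the tracker issue Don linked) is the usual next step.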
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com