Strange osd in PG with new EC-Pool - pgs: 2 active+undersized+degraded

Hi,
after adding two more hosts (now 7 storage nodes) I wanted to create a new
EC pool and ran into a strange effect:

ceph@admin:~$ ceph health detail
HEALTH_WARN 2 pgs degraded; 2 pgs stuck degraded; 2 pgs stuck unclean; 2
pgs stuck undersized; 2 pgs undersized
pg 22.3e5 is stuck unclean since forever, current state
active+undersized+degraded, last acting [76,15,82,11,57,29,2147483647]
pg 22.240 is stuck unclean since forever, current state
active+undersized+degraded, last acting [38,85,17,74,2147483647,10,58]
pg 22.3e5 is stuck undersized for 406.614447, current state
active+undersized+degraded, last acting [76,15,82,11,57,29,2147483647]
pg 22.240 is stuck undersized for 406.616563, current state
active+undersized+degraded, last acting [38,85,17,74,2147483647,10,58]
pg 22.3e5 is stuck degraded for 406.614566, current state
active+undersized+degraded, last acting [76,15,82,11,57,29,2147483647]
pg 22.240 is stuck degraded for 406.616679, current state
active+undersized+degraded, last acting [38,85,17,74,2147483647,10,58]
pg 22.3e5 is active+undersized+degraded, acting
[76,15,82,11,57,29,2147483647]
pg 22.240 is active+undersized+degraded, acting
[38,85,17,74,2147483647,10,58]

But I only have 91 OSDs (84 SATA + 7 SSDs), not 2147483647!
Where the heck does the 2147483647 come from?
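
As far as I know, 2147483647 is 2^31-1, the placeholder CRUSH reports when it
cannot find an OSD for a shard, so it is not a real OSD id. A minimal
diagnostic sketch, using the PG ids from the output above:

ceph pg 22.3e5 query   # per-shard up/acting sets of the broken PG
ceph osd tree          # CRUSH hierarchy, hosts and weights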

I ran the following commands:
ceph osd erasure-code-profile set 7hostprofile k=5 m=2
ruleset-failure-domain=host
ceph osd pool create ec7archiv 1024 1024 erasure 7hostprofile
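
To double-check what got created - the rule name "ec7archiv" is an assumption
here, as the rule for an EC pool is normally named after the pool:

ceph osd erasure-code-profile get 7hostprofile   # shows k, m and the failure domain
ceph osd crush rule dump ec7archiv               # shows the generated EC rule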

My version:
ceph -v
ceph version 0.87.1 (283c2e7cfa2457799f534744d7d549f83ea1335e)


I found an issue in my crush map - one SSD appeared twice in the map:
host ceph-061-ssd {
        id -16          # do not change unnecessarily
        # weight 0.000
        alg straw
        hash 0  # rjenkins1
}
root ssd {
        id -13          # do not change unnecessarily
        # weight 0.780
        alg straw
        hash 0  # rjenkins1
        item ceph-01-ssd weight 0.170
        item ceph-02-ssd weight 0.170
        item ceph-03-ssd weight 0.000
        item ceph-04-ssd weight 0.170
        item ceph-05-ssd weight 0.170
        item ceph-06-ssd weight 0.050
        item ceph-07-ssd weight 0.050
        item ceph-061-ssd weight 0.000
}

Host ceph-061-ssd doesn't exist and osd.61 is the SSD from ceph-03-ssd,
but even after fixing the crush map the issue with osd 2147483647 still exists.
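
For completeness, roughly the steps to pull, edit and re-inject the crush map
(the rule number for the crushtool test is only a placeholder and has to be
taken from the decompiled map):

ceph osd getcrushmap -o crush.bin
crushtool -d crush.bin -o crush.txt
# edit crush.txt and drop the bogus ceph-061-ssd entries
crushtool -c crush.txt -o crush.new
crushtool -i crush.new --test --rule 1 --num-rep 7 --show-bad-mappings
ceph osd setcrushmap -i crush.new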

Any idea how to fix that?

regards

Udo




