On 02/23/2018 02:05 PM, Sage Weil wrote:
On Fri, 23 Feb 2018, Wido den Hollander wrote:
Hi,
I reported this on ceph-users here:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-February/024885.html
It turns out that the balancer module created upmap entries that map two replicas of a PG to OSDs on the same host.
I had one PG (1.41) whose upmap entry referenced a lot of OSDs:
root@man:~# ceph osd dump|grep pg_upmap|grep 1.41
pg_upmap_items 1.41 [9,15,11,7,10,2]
root@man:~#
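For reference, pg_upmap_items is a flat list of (from, to) OSD pairs, so the entry above remaps osd.9 -> osd.15, osd.11 -> osd.7 and osd.10 -> osd.2. If such an entry ever needs to be cleared by hand as a workaround, something like this should do it (pg id taken from above):

ceph osd rm-pg-upmap-items 1.41

That removes the exception and lets CRUSH compute the placement for that PG again.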
But using 'ceph pg map' I saw it eventually mapped to:
root@man:~# ceph pg map 1.41
osdmap e21543 pg 1.41 (1.41) -> up [15,7,4] acting [15,7,4]
root@man:~#
osd.15 and osd.4 are both on the same host.
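The co-location is easy to verify from the cluster itself; a quick check, with the osd ids taken from the mapping above:

ceph osd find 15
ceph osd find 4

Both print the same host in their crush_location.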
This cluster is running v12.2.3 with the balancer enabled in 'upmap' mode.
The balancer module wasn't enabled prior to 12.2.3.
I searched the tracker for an issue but couldn't find one about it.
Is this a known issue?
If needed I have:
- the OSDMap
- the CRUSHMap
Both are from the situation when it was 'broken'.
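They can be grabbed from a live cluster with something like the following (output paths are arbitrary):

ceph osd getmap -o /tmp/osdmap.bin
ceph osd getcrushmap -o /tmp/crushmap.bin
crushtool -d /tmp/crushmap.bin -o /tmp/crushmap.txt

The last step decompiles the binary CRUSH map into editable text.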
Nope, it's not a known issue. Can you open a tracker ticket and
(if possible) attach or ceph-post-file the OSDMap?
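For reference, ceph-post-file just takes a path and prints an id that can be pasted into the ticket, e.g. (path is illustrative):

ceph-post-file /tmp/osdmap.bin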
Yes, I did, here you go: http://tracker.ceph.com/issues/23118
Wido
Thanks!
sage