Balancer module maps PG to OSDs on the same host

Hi,

I reported this on ceph-users here: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-February/024885.html

It turns out that the balancer module created pg_upmap entries that mapped a PG onto two OSDs on the same host.

I had one PG (1.41) which mapped to a lot of OSDs:

root@man:~# ceph osd dump|grep pg_upmap|grep 1.41
pg_upmap_items 1.41 [9,15,11,7,10,2]
root@man:~#
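
For reference, pg_upmap_items entries are [from-OSD, to-OSD] pairs, so the line above remaps osd.9 -> osd.15, osd.11 -> osd.7 and osd.10 -> osd.2 for this PG. A rough sketch of how such an entry can be inspected and, if necessary, dropped again (standard Luminous commands, try on a test cluster first):

# List all upmap exception entries in the OSDMap
ceph osd dump | grep pg_upmap_items

# Drop the upmap entry for PG 1.41 so CRUSH decides the mapping again
ceph osd rm-pg-upmap-items 1.41

# Verify the resulting up/acting set
ceph pg map 1.41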

But using 'ceph pg map' I saw it eventually mapped to:

root@man:~# ceph pg map 1.41
osdmap e21543 pg 1.41 (1.41) -> up [15,7,4] acting [15,7,4]
root@man:~#

osd.15 and osd.4 are both on the same host.
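
That can be verified by looking at the CRUSH location of both OSDs, for example:

# Show the CRUSH location (including host) of each OSD
ceph osd find 15
ceph osd find 4

# Or just look at the CRUSH tree
ceph osd tree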

This cluster is running v12.2.3 with the balancer enabled in 'upmap' mode.

The balancer module wasn't enabled prior to 12.2.3.
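
For completeness, enabling the balancer in upmap mode on Luminous looks roughly like the following (a sketch, not an exact transcript from this cluster):

ceph osd set-require-min-compat-client luminous   # upmap requires Luminous+ clients
ceph mgr module enable balancer
ceph balancer mode upmap
ceph balancer on
ceph balancer status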

I searched the tracker for an issue but couldn't find one about it.

Is this a known issue?

If needed I have:

- OSDMap
- CRUSHMap

Both from the situation when it was 'broken'.
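
If it helps, this is how such maps can be extracted and how the mapping for the affected PG can be replayed offline with osdmaptool (a sketch; the file names are arbitrary):

# Extract the current OSDMap and CRUSH map
ceph osd getmap -o osdmap.bin
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

# Replay the mapping for PG 1.41 from the saved OSDMap
osdmaptool osdmap.bin --test-map-pg 1.41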

Wido


