Re: pg rebalancing after taking osds out


 



On Tue, 26 Jan 2016, Deneau, Tom wrote:
> I have a replicated x2 pool with the crush step of host.
> When the pool was created there were 3 hosts with 7 osds each,
> and looking at the pg-by-pool for that pool I can see that
> every pg has copies on two different hosts.
> 
> Now I want to take 2 osds out of each node, which I did using
> the osd out command. (So there are then 5 osds per host node).
> 
> Now I rerun ceph pg ls-by-pool for that pool and it shows that
> some pgs have both their copies on the same node.
> 
> Is this normal?  My expectation was that each pg still
> had its two copies on two different hosts.
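
As a sanity check, something like this (pool name below is just a
placeholder) will confirm the pool really has size 2 and uses a rule
whose chooseleaf step is on type host:

    ceph osd pool get <pool> size            # should report size: 2
    ceph osd pool get <pool> crush_ruleset   # which rule the pool uses
    ceph osd crush rule dump                 # the chooseleaf step should show "type": "host"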

Are those PGs in the 'remapped' state? What does the tree look like 
('ceph osd tree')?
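
Something along these lines should show it (again, <pool> is a placeholder):

    ceph pg ls-by-pool <pool> remapped   # PGs whose acting set differs from the up set
    ceph osd tree                        # out'd OSDs show REWEIGHT 0 but stay under their host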

sage


