RE: pg rebalancing after taking osds out

I noticed that if I actually remove the osds from the crush map (after
using ceph osd out), everything works as I would expect.
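
For reference, by "actually remove from the crush map" I mean the usual
sequence, something like this (osd.5 shown just as an example):

   # mark the osd out so data starts remapping off of it
   ceph osd out 5
   # then remove it from the crush map entirely
   ceph osd crush remove osd.5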

So at the time of the behavior mentioned below (without removing the osds
from the crush map), the tree looked something like this.  Sorry, I don't
have the pg state saved from that time, but I could recreate it if needed:

ID WEIGHT   TYPE NAME                         UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 37.98688 root default
-2 12.66229     host node-01
 0  1.80890         osd.0                          up  1.00000          1.00000
 1  1.80890         osd.1                          up  1.00000          1.00000
 2  1.80890         osd.2                          up  1.00000          1.00000
 3  1.80890         osd.3                          up  1.00000          1.00000
 4  1.80890         osd.4                          up  1.00000          1.00000
 5  1.80890         osd.5                          up        0          1.00000
 6  1.80890         osd.6                          up        0          1.00000
-3 12.66229     host node-02
 7  1.80890         osd.7                          up  1.00000          1.00000
 8  1.80890         osd.8                          up  1.00000          1.00000
 9  1.80890         osd.9                          up  1.00000          1.00000
10  1.80890         osd.10                         up  1.00000          1.00000
11  1.80890         osd.11                         up  1.00000          1.00000
12  1.80890         osd.12                         up        0          1.00000
13  1.80890         osd.13                         up        0          1.00000
-4 12.66229     host node-03
14  1.80890         osd.14                         up  1.00000          1.00000
15  1.80890         osd.15                         up  1.00000          1.00000
16  1.80890         osd.16                         up  1.00000          1.00000
17  1.80890         osd.17                         up  1.00000          1.00000
18  1.80890         osd.18                         up  1.00000          1.00000
19  1.80890         osd.19                         up        0          1.00000
20  1.80890         osd.20                         up        0          1.00000
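
If it would help, when I recreate the state I can cross-check which host
each copy of a pg lands on with something like the following (pool name
and osd id are placeholders):

   # up/acting osd sets for each pg in the pool
   ceph pg ls-by-pool <pool>
   # crush location (host) of a particular osd, e.g. osd.12
   ceph osd find 12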

-- Tom


> -----Original Message-----
> From: Sage Weil [mailto:sage@xxxxxxxxxxxx]
> Sent: Tuesday, January 26, 2016 4:44 PM
> To: Deneau, Tom
> Cc: ceph-devel@xxxxxxxxxxxxxxx
> Subject: Re: pg rebalancing after taking osds out
> 
> On Tue, 26 Jan 2016, Deneau, Tom wrote:
> > I have a replicated x2 pool with the crush step of host.
> > When the pool was created there were 3 hosts with 7 osds each, and
> > looking at the pg-by-pool for that pool I can see that every pg has
> > copies on two different hosts.
> >
> > Now I want to take 2 osds out of each node, which I did using the osd
> > out command. (So there are then 5 osds per host node).
> >
> > Now I rerun ceph pg ls-by-pool for that pool and it shows that some
> > pgs have both their copies on the same node.
> >
> > Is this normal?  My expectation was that each pg still had its two
> > copies on two different hosts.
> 
> Are those PGs in the 'remapped' state?  What does the tree look like
> ('ceph osd tree')?
> 
> sage
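
To answer the 'remapped' question once the state is recreated, something
like this should do it (pool name is a placeholder):

   # cluster summary; remapped pg counts show up in the status output
   ceph -s
   # per-pg state for the pool in question
   ceph pg ls-by-pool <pool> | grep remapped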