Hi Zhang...
I think you are dealing with two different problems. The first problem concerns the number of PGs per OSD; that was already discussed, and there are no more messages about it. The second problem you are experiencing seems to be that all your OSDs are under the same host. Besides that, osd.0 appears twice, under two different hosts (I do not really know why that is happening). If you are using the default CRUSH rules, Ceph is not able to replicate objects (even with size 2) across two different hosts, because all your OSDs are under just one host.

Cheers
Goncalo

From: Zhang Qiang [dotslash.lu@xxxxxxxxx]
Sent: 23 March 2016 23:17
To: Goncalo Borges
Cc: Oliver Dzombic; ceph-users
Subject: Re: Need help for PG problem

And here's the osd tree if it matters.
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 22.39984 root default
-2 21.39984 host 10
0 1.06999 osd.0 up 1.00000 1.00000
1 1.06999 osd.1 up 1.00000 1.00000
2 1.06999 osd.2 up 1.00000 1.00000
3 1.06999 osd.3 up 1.00000 1.00000
4 1.06999 osd.4 up 1.00000 1.00000
5 1.06999 osd.5 up 1.00000 1.00000
6 1.06999 osd.6 up 1.00000 1.00000
7 1.06999 osd.7 up 1.00000 1.00000
8 1.06999 osd.8 up 1.00000 1.00000
9 1.06999 osd.9 up 1.00000 1.00000
10 1.06999 osd.10 up 1.00000 1.00000
11 1.06999 osd.11 up 1.00000 1.00000
12 1.06999 osd.12 up 1.00000 1.00000
13 1.06999 osd.13 up 1.00000 1.00000
14 1.06999 osd.14 up 1.00000 1.00000
15 1.06999 osd.15 up 1.00000 1.00000
16 1.06999 osd.16 up 1.00000 1.00000
17 1.06999 osd.17 up 1.00000 1.00000
18 1.06999 osd.18 up 1.00000 1.00000
19 1.06999 osd.19 up 1.00000 1.00000
-3 1.00000 host 148_96
0 1.00000 osd.0 up 1.00000 1.00000
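(For reference, the host-level replication Goncalo describes comes from the "chooseleaf ... type host" step in the default replicated CRUSH rule. A typical default rule, as seen in a decompiled CRUSH map, looks roughly like the sketch below; the exact rule name and ruleset id may differ on your cluster.)

```
rule replicated_ruleset {
    ruleset 0
    type replicated
    min_size 1
    max_size 10
    # start placement at the root of the hierarchy
    step take default
    # pick N distinct hosts, then one OSD under each:
    # with all OSDs under a single host, only one replica can be placed
    step chooseleaf firstn 0 type host
    step emit
}
```

With 20 of the 21 OSDs under host "10", this rule can only ever satisfy one replica; fixing the host buckets in the CRUSH map (or, as a temporary workaround, changing the failure domain to "type osd") would let size-2 placement succeed.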
On Wed, 23 Mar 2016 at 19:10 Zhang Qiang <dotslash.lu@xxxxxxxxx> wrote:
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com