On 17/12/2015 13:57, Loris Cuoghi wrote:
On 17/12/2015 13:52, Burkhard Linke wrote:
Hi,
On 12/17/2015 01:41 PM, Dan Nica wrote:
And the osd tree:
$ ceph osd tree
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 21.81180 root default
-2 21.81180 host rimu
0 7.27060 osd.0 up 1.00000 1.00000
1 7.27060 osd.1 up 1.00000 1.00000
2 7.27060 osd.2 up 1.00000 1.00000
The default CRUSH ruleset distributes PG replicas across hosts. With a
single host the rules are not able to find a second OSD for the
replicas.
Solutions:
- add a second host, or
- change the CRUSH ruleset to distribute based on OSDs instead of hosts
  (a rough sketch follows below).
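A minimal sketch of the second option, assuming the pool still uses the
default replicated ruleset; the local file names below are arbitrary:

$ ceph osd getcrushmap -o crushmap.bin
$ crushtool -d crushmap.bin -o crushmap.txt
# in crushmap.txt, in the ruleset the pool uses, change
#   step chooseleaf firstn 0 type host
# to
#   step chooseleaf firstn 0 type osd
$ crushtool -c crushmap.txt -o crushmap-new.bin
$ ceph osd setcrushmap -i crushmap-new.bin

For a fresh test cluster the same effect can usually be had by setting
'osd crush chooseleaf type = 0' in ceph.conf before the initial CRUSH
map is created.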
Regards,
Burkhard
--
Dr. rer. nat. Burkhard Linke
Bioinformatics and Systems Biology
Justus-Liebig-University Giessen
35392 Giessen, Germany
Phone: (+49) (0)641 9935810
The default replication count is now 3, so three hosts are needed by default.
Lowering the pool's replication count (e.g. to 1) for testing purposes
is also a possibility.
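For example, assuming the pool is named 'data', something like this
should do for a test setup:

$ ceph osd pool set data size 1
$ ceph osd pool set data min_size 1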
> $ ceph osd dump | grep 'replicated size'
>
> pool 2 'data' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 52 flags hashpspool stripe_width 0
Sorry, didn't see that...
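Since the pool's size is already 2, the blocker is most likely the
host-level failure domain in the default ruleset (crush_ruleset 0
above). A quick way to check which failure domain that rule uses:

$ ceph osd crush rule dump
# look for "type": "host" in the chooseleaf step of rule 0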