Hi,

On 2015-09-17 at 19:02, Stefan Eriksson wrote:
> I purged all nodes and did purgedata as well and restarted; after this
> everything was fine. You are most certainly right. If anyone else has
> this error, reinitializing the cluster might be the fastest way forward.

Great that it worked for you; it didn't for me. This is my second
installation of Ceph on two nodes with 4 OSDs, and I still oscillate
between your original problem (a default pool from the installation
whose origin I cannot explain) and the "too few PGs per OSD (0 < min 30)"
warning when I delete that default pool.

I basically followed the procedure described here [1] and made some
modifications to the config before calling 'ceph-deploy install' on my
nodes. Here is the config I use (fsid and IPs deleted):

<snip>
[global]
fsid = ID
mon_initial_members = ceph1
mon_host = private-ip
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
public_network = public-network
cluster_network = private-network
osd_pool_default_size = 2
osd_pool_default_min_size = 1
osd_pool_default_pg_num = 150
osd_pool_default_pgp_num = 150
osd_crush_chooseleaf_type = 1

[osd]
osd_journal_size = 10000
</snip>

[1] http://docs.ceph.com/docs/master/start/quick-ceph-deploy/

--
J.Hofmüller

A literary masterwork is only a dictionary in disorder.
  - Jean Cocteau
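As an aside on the osd_pool_default_pg_num value: a commonly cited rule of thumb is (number of OSDs x ~100 target PGs per OSD) / replica count, rounded up to the next power of two. A minimal Python sketch of that arithmetic follows; the function name and the target of 100 PGs per OSD are my own illustrative choices, not an official Ceph tool:

```python
import math

def recommended_pg_num(num_osds, pool_size, target_pgs_per_osd=100):
    """Rule-of-thumb PG count for a new pool (illustrative only).

    Computes (num_osds * target_pgs_per_osd) / pool_size and rounds
    up to the next power of two, as commonly recommended.
    """
    raw = (num_osds * target_pgs_per_osd) / pool_size
    return 2 ** math.ceil(math.log2(raw))

# For the cluster above: 4 OSDs, osd_pool_default_size = 2
print(recommended_pg_num(4, 2))  # -> 256
```

By this rule the pg_num of 150 above would round up to 256; note also that 150 is not a power of two, which the usual sizing advice prefers.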
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com