2016-12-22 12:18 GMT+01:00 Henrik Korkuc <lists@xxxxxxxxx>:
> On 16-12-22 13:12, Stéphane Klein wrote:
>> HEALTH_WARN 43 pgs degraded; 43 pgs stuck unclean; 43 pgs undersized;
>> recovery 24/70 objects degraded (34.286%); too few PGs per OSD (28 < min 30);
>> 1/3 in osds are down;
>
> It says 1/3 OSDs are down. By default, Ceph pools are set up with size 3.
> If your setup is the same, it will not be able to return to a normal status
> without a size decrease or additional OSDs.
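
For reference, decreasing the size of an existing pool is done with "ceph osd pool set"; a minimal sketch, assuming a pool named "rbd" (the pool name here is only an example):

    ceph osd pool ls detail            # show the current size/min_size of each pool
    ceph osd pool set rbd size 2       # drop the replica count to 2
    ceph osd pool set rbd min_size 1   # allow I/O with a single replica available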
I have this config:
ceph_conf_overrides:
  global:
    osd_pool_default_size: 2
    osd_pool_default_min_size: 1
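
ceph_conf_overrides here looks like the ceph-ansible variable; those values only become the defaults for pools created after the setting is in place, and existing pools keep the size they were created with. A quick check of what the cluster is actually using might look like this (the pool name "rbd" is again only an example):

    ceph osd dump | grep 'replicated size'   # per-pool size/min_size as currently applied
    ceph osd pool get rbd size               # replica count of a single pool
    ceph osd pool get rbd min_size           # minimum replicas required to serve I/O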