Re: When I shutdown one osd node, where can I see the block movement?

2016-12-22 12:18 GMT+01:00 Henrik Korkuc <lists@xxxxxxxxx>:
On 16-12-22 13:12, Stéphane Klein wrote:
HEALTH_WARN 43 pgs degraded; 43 pgs stuck unclean; 43 pgs undersized; recovery 24/70 objects degraded (34.286%); too few PGs per OSD (28 < min 30); 1/3 in osds are down;

It says 1/3 OSDs are down. By default, Ceph pools are set up with size 3. If your setup is the same, it will not be able to return to a healthy status without decreasing the pool size or adding OSDs.
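A quick way to confirm this (the pool name "rbd" below is only a placeholder, adjust to your pools) is to check what the existing pools actually report and to watch recovery live:

    ceph osd pool ls detail        # lists every pool with its size/min_size
    ceph osd pool get rbd size     # size of a single pool, e.g. "rbd"
    ceph -w                        # follow PG recovery/backfill as it happens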

I have this config:

ceph_conf_overrides:
   global:
      osd_pool_default_size: 2
      osd_pool_default_min_size: 1

see: https://github.com/harobed/poc-ceph-ansible/blob/master/vagrant-3mons-3osd/hosts/group_vars/all.yml#L11
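(Worth noting: osd_pool_default_size only affects pools created after that setting is in place, so pools created earlier may still report size 3. A minimal check/fix, again with "rbd" as a placeholder pool name:

    ceph osd pool get rbd size     # verify what the pool actually uses
    ceph osd pool set rbd size 2   # only needed if it still reports 3
)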
 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
