Re: Ceph pool resize

Moving this to ceph-users

On Mon, Feb 6, 2017 at 3:51 PM, nigel davies <nigdav007@xxxxxxxxx> wrote:
> Hey,
>
> I am helping to run a small Ceph cluster with a two-node setup.
>
> We have recently bought a third storage node, and management wants to
> increase the replication from two to three.
>
> As soon as I changed the pool size from 2 to 3, the cluster went into a
> warning state:
>
>      health HEALTH_WARN
>             512 pgs degraded
>             512 pgs stuck unclean
>             512 pgs undersized
>             recovery 5560/19162 objects degraded (29.016%)
>             election epoch 50, quorum 0,1
>      osdmap e243: 20 osds: 20 up, 20 in
>             flags sortbitwise
>       pgmap v79260: 2624 pgs, 3 pools, 26873 MB data, 6801 objects
>             54518 MB used, 55808 GB / 55862 GB avail
>             5560/19162 objects degraded (29.016%)
>                 2112 active+clean
>                  512 active+undersized+degraded
>
> The cluster is not recovering by itself; any help would be greatly appreciated.
>
>
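
For reference, a minimal sketch of the resize step described above, assuming
a replicated pool; <pool> is a placeholder for the actual pool name, which
the original message does not give:

    # raise the replica count from 2 to 3 on the pool
    ceph osd pool set <pool> size 3
    # optionally keep min_size at 2 so client I/O continues while the
    # third copies are being created
    ceph osd pool set <pool> min_size 2
    # confirm the new setting took effect
    ceph osd pool get <pool> size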
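Undersized PGs after a size increase usually mean CRUSH cannot find a third
failure domain (typically a third host) to place the extra replica on, for
example if the new node's OSDs have not yet been added to the CRUSH map. A
few generic diagnostic commands, as a sketch rather than anything specific
to this cluster:

    # show the CRUSH hierarchy; the third host and its OSDs should appear here
    ceph osd tree
    # list the PGs that are stuck unclean and the OSDs they map to
    ceph pg dump_stuck unclean
    # inspect the replicated rule; its failure domain (usually "host")
    # needs at least "size" buckets to choose from
    ceph osd crush rule dump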



-- 

Best Regards,

Patrick McGarry
Director Ceph Community || Red Hat
http://ceph.com  ||  http://community.redhat.com
@scuttlemonkey || @ceph


