Hi Greg,

At the moment our cluster is all in balance. We have one failed drive that will be replaced in a few days (the OSD has been removed from Ceph and will be re-added with the replacement drive). I'll document the state of the PGs before the new drive is added and during the recovery process, and report back.

We have a few pools. Most are on 3 replicas now; some holding non-critical data that we have elsewhere are on 2. But I've seen the degradation even on the 3-replica pools (I believe my original example included such a pool as well).
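For reference, here's roughly what I have in mind for capturing the state -- just plain ceph CLI output dumped to files; the file names and the 60-second interval are arbitrary:

    # snapshot before the new OSD goes in
    ceph -s            > status-before.txt
    ceph health detail > health-before.txt
    ceph osd tree      > osd-tree-before.txt
    ceph pg dump       > pg-dump-before.txt

    # then repeat periodically while backfill/recovery runs
    while true; do
        ts=$(date +%Y%m%d-%H%M%S)
        ceph health detail > health-$ts.txt
        ceph pg dump       > pg-dump-$ts.txt
        sleep 60
    done

That should give us the per-PG states (active+clean, degraded, backfilling, etc.) over the course of the recovery.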
Andras

On 06/30/2017 04:38 PM, Gregory Farnum wrote: