Hi Joachim,

Understood on the risks. Aside from the alternate cluster, we have 3 other
copies of the data outside of Ceph, so I feel fairly confident that a failure
would cost us repopulation time rather than data loss. That said, I would be
interested in hearing about your experience if you've attempted something
similar previously.

Thanks,
Marco

On Sat, Dec 11, 2021 at 6:59 AM Joachim Kraftmayer (Clyso GmbH) <
joachim.kraftmayer@xxxxxxxxx> wrote:

> Hi Marco,
>
> To quote an old colleague: this is one of the ways to break a Ceph
> cluster and its data.
>
> Perhaps the risks are not immediately visible in normal operation, but
> in the event of a failure, the potential loss of data must be accepted.
>
> Regards,
> Joachim
>
>
> ___________________________________
>
> Clyso GmbH - ceph foundation member
>
> On 10.12.21 at 18:04, Marco Pizzolo wrote:
> > Hello,
> >
> > As part of a migration process in which we will be swinging Ceph hosts
> > from one cluster to another, we need to reduce the replica size from 3
> > to 2 in order to shrink the footprint enough to allow safe removal of
> > an OSD/Mon node.
> >
> > Per the dashboard, the cluster holds about 500M objects and is roughly
> > 1.5PB in size, comprised solely of small files served through CephFS
> > to Samba.
> >
> > Has anyone encountered a similar situation? What (if any) problems did
> > you face?
> >
> > Ceph 14.2.22, bare metal deployment on CentOS.
> >
> > Thanks in advance.
> >
> > Marco
> > _______________________________________________
> > ceph-users mailing list -- ceph-users@xxxxxxx
> > To unsubscribe send an email to ceph-users-leave@xxxxxxx
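
[Editor's note: for readers finding this thread later, the size reduction
Marco describes is typically done per pool with the Ceph CLI. The sketch
below assumes default CephFS pool names (`cephfs_data`, `cephfs_metadata`);
check the actual names on your cluster with `ceph osd pool ls`.]

```shell
# Inspect current replication settings for the data pool
ceph osd pool get cephfs_data size
ceph osd pool get cephfs_data min_size

# Drop to two replicas; Ceph begins removing the third copy, which is
# what shrinks the footprint ahead of the node removal
ceph osd pool set cephfs_data size 2

# Trade-off: with size 2, min_size 2 blocks I/O whenever either replica
# is down, while min_size 1 keeps serving but accepts writes with only
# one surviving copy (the risk Joachim warns about above)
ceph osd pool set cephfs_data min_size 2

# Watch data movement settle before pulling the OSD/Mon node
ceph -s
```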