Hi Marco,
to quote an old colleague, this is one of the ways to break a Ceph
cluster and lose its data.
The risks may not be immediately visible in normal operation, but in the
event of a failure you have to be prepared to accept the potential loss
of data.
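For illustration only (the pool name cephfs_data below is an assumption;
substitute your actual data and metadata pools): the change itself is a
single command per pool, which is exactly why it is so easy to get wrong:

    # check the current replication settings first
    ceph osd pool get cephfs_data size
    ceph osd pool get cephfs_data min_size
    # dropping to 2 copies; with size 2 and min_size 1, a single OSD
    # failure leaves only one remaining copy of the affected PGs
    ceph osd pool set cephfs_data size 2

Whether to lower min_size as well is the real question: with size 2 and
min_size 1, writes can be accepted while only one copy exists, and that
copy is then your only one.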
Regards, Joachim
___________________________________
Clyso GmbH - ceph foundation member
Am 10.12.21 um 18:04 schrieb Marco Pizzolo:
Hello,
As part of a migration process where we will be swinging Ceph hosts from
one cluster to another, we need to reduce the pool size (replica count)
from 3 to 2 in order to shrink the footprint sufficiently to allow safe
removal of an OSD/Mon node.
The cluster has about 500M objects as per the dashboard, and is about
1.5PB in size, consisting solely of small files served through CephFS to
Samba.
Has anyone encountered a similar situation? What (if any) problems did you
face?
Ceph 14.2.22, bare metal deployment on CentOS.
Thanks in advance.
Marco
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx