Could you share more details? Does Ceph report inactive PGs when one
node is down? Please share:
ceph osd tree
ceph osd pool ls detail
ceph osd crush rule dump <rule of affected pool>
ceph pg ls-by-pool <affected pool>
ceph -s
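For the crush rule, the rule of a pool shows up as the crush_rule field in
"ceph osd pool ls detail" and can then be dumped by name; a rough example,
assuming the pool still uses the default replicated_rule (the rule name here
is only an assumption):
ceph osd pool ls detail                    # note the crush_rule id of the affected pool
ceph osd crush rule dump                   # without arguments, lists all rules with rule_id and rule_name
ceph osd crush rule dump replicated_rule   # dump the specific rule by name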
Quoting Murilo Morais <murilo@xxxxxxxxxxxxxx>:
Thanks for answering.
Marc, is there no mechanism to prevent the IO pause? At the moment I'm not
worried about data loss.
I understand that setting it to replica 1x can work, but I need it to be 2x.
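If the goal is to keep client IO going with 2x replication while one of the
two data hosts is down, the usual (risky) knob is the pool's min_size; a
minimal sketch, assuming a replicated pool named "mypool" (the name is
hypothetical):
ceph osd pool get mypool min_size    # with size 2 / min_size 2, PGs go inactive once a replica is lost
ceph osd pool set mypool min_size 1  # allow IO with a single surviving replica
Keep in mind that with min_size 1 writes are acknowledged with only one copy
on disk, so any further failure during that window can mean data loss.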
On Thu, 13 Oct 2022 at 12:26, Marc <Marc@xxxxxxxxxxxxxxxxx> wrote:
>
> I'm having strange behavior on a new cluster.
Not strange, by design.
> I have 3 machines, two of them have the disks. We can name them like
> this: dcs1 to dcs3. The dcs1 and dcs2 machines contain the disks.
>
> I started bootstrapping through dcs1, added the other hosts and left mgr
> on dcs3 only.
>
> What is happening is that if I take down dcs2, everything hangs and
> becomes unresponsive, including the mount points that were pointed to dcs1.
You have to have disks in 3 machines. (Or set the replication to 1x)
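For reference, dropping to a single replica has to be allowed explicitly on
recent releases; a sketch, again with the hypothetical pool name "mypool":
ceph config set global mon_allow_pool_size_one true
ceph osd pool set mypool size 1 --yes-i-really-mean-it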
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx