Re: Fencing an entire client cluster from access to Ceph (in kubernetes)

A few scenarios we may need to consider for fencing:

* Fencing the workloads of a single namespace, when we want to move only that namespace's workloads rather than all namespaces (the non-critical workloads won't be moved to the secondary site, but they need to be started again once the primary cluster is recovered).
* A few critical applications within a namespace may need to be fenced (these applications may be running on different nodes) when they are moved to the secondary site.
* We also need to revert to the original state (unblocklist the clients) once the primary cluster is recovered; see the sketch at the end of this mail.
* Do we need to consider anything for the async DR case, e.g. when the primary cluster's control plane is dead?

Or do we need to fence all the clients in the primary cluster in case of DR?
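
For reference, the fence/unfence operations above ultimately come down to adding and removing entries in the OSD blocklist for the affected client addresses. Below is a rough sketch using the python-rados bindings, just to make the operations concrete; the client addresses are placeholders and would need to be gathered per namespace/application (e.g. from the RBD image watchers), and pre-Pacific clusters still spell the command "osd blacklist".

# Rough sketch: fence/unfence a set of client addresses by editing the
# OSD blocklist through the mon command interface (python-rados bindings).
import json
import rados

def set_blocklist(cluster, addrs, op):
    """op is "add" to fence the clients, "rm" to unblocklist them."""
    for addr in addrs:
        cmd = json.dumps({
            "prefix": "osd blocklist",  # "osd blacklist" on pre-Pacific clusters
            "blocklistop": op,
            "addr": addr,
        })
        ret, _, errs = cluster.mon_command(cmd, b"")
        if ret != 0:
            raise RuntimeError(f"osd blocklist {op} {addr} failed: {errs}")

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    # Placeholder addresses; in practice these would be the client/watcher
    # addresses backing the namespace's (or application's) volumes.
    clients = ["10.0.0.11:0/1234567890", "10.0.0.12:0/987654321"]
    set_blocklist(cluster, clients, "add")  # fence before failing over
    # ... later, once the primary cluster has recovered ...
    set_blocklist(cluster, clients, "rm")   # unblocklist / revert to original state
finally:
    cluster.shutdown()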


