Problem with upgrade



I'm trying to upgrade our 3-monitor cluster from CentOS 7 and Nautilus to
Rocky 9 and Quincy. This has been a very slow process of upgrading one
thing, running the cluster for a while, then upgrading the next thing. I
first upgraded to the latest CentOS 7 release and then to Octopus. That
worked fine. Next I planned to upgrade the OS to Rocky 9 while staying on
Octopus, but then discovered that Octopus packages are not available for
Rocky 9. So I broke my own rule and upgraded one of the monitor (and
manager) nodes to Rocky 9 and Pacific in one step, then rejoined it to the
cluster. That seemed to work just fine. Feeling bold, I upgraded the second
monitor and manager node to Rocky 9 and Pacific. That also seemed to work
fine, with the cluster showing all three monitors and managers running. But
now, if I shut down the last "Octopus" monitor, the whole cluster becomes
unresponsive. This only happens when I shut down the Octopus monitor. If I
shut down one of the Pacific monitors, the cluster keeps responding with
the expected:
  "HEALTH_WARN 1/3 mons down"
and then goes back to normal when the monitor process is started again.
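In case it helps, these are the checks I've been running. The live commands are the standard ceph CLI; the JSON below is a made-up placeholder (the mon names "mon-a"/"mon-b" are not our real ones) just to show which fields of the quorum_status output I'm looking at:

```shell
# On the live cluster:
#   ceph -s                      # overall cluster health
#   ceph quorum_status -f json   # which mons are currently in quorum
#   ceph versions                # release version of each running daemon
#
# Offline illustration with placeholder data — the real output comes
# from `ceph quorum_status -f json`:
cat > /tmp/quorum_status.json <<'EOF'
{
  "quorum_names": ["mon-a", "mon-b"],
  "monmap": { "min_mon_release_name": "octopus" }
}
EOF

# Pull out the mons in quorum and the monmap's minimum release:
python3 - <<'EOF'
import json

d = json.load(open("/tmp/quorum_status.json"))
print("in quorum:", ", ".join(d["quorum_names"]))
print("min mon release:", d["monmap"]["min_mon_release_name"])
EOF
```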

Is this expected? What am I missing? Thanks for any pointers!
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
