Hi Ashley,

Thank you for the warning. I will not update to 15.2.2 at the moment. And yes, I did not receive any email from Sebastian, but it is there in the ceph list. I replied by email, but I cannot see Sebastian's email address, so I'm not sure whether he has seen my previous reply or not. I've sent the mgr logs; I hope he sees them soon and does not miss them.

Thanks,
Gencer.

On 21.05.2020 20:25:03, Ashley Merrick <singapore@xxxxxxxxxxxxxx> wrote:

Hello,

Yes I did, but I wasn't able to suggest anything further to get around it. However:

1/ There is currently an issue with 15.2.2, so I would advise holding off on any upgrade.
2/ Another mailing list user replied to one of your older emails in the thread asking for some manager logs; not sure if you have seen this.

Thanks

---- On Fri, 22 May 2020 01:21:26 +0800 gencer@xxxxxxxxxxxxx wrote ----

Hi Ashley,

Have you seen my previous reply? If so, and there is no solution, does anyone have any idea how this can be done with 2 nodes?

Thanks,
Gencer.

On 20.05.2020 16:33:53, Gencer W. Genç <gencer@xxxxxxxxxxxxx> wrote:

This is a 2-node setup. I have no third node :( I am planning to add more in the future, but currently it is 2 nodes only. At the moment, is there a --force command for such usage?

On 20.05.2020 16:32:15, Ashley Merrick <singapore@xxxxxxxxxxxxxx> wrote:

Correct; however, it will need to stop one to do the upgrade, leaving you with only one working MON (this is what I would suggest the error means, seeing as I had the same thing when I only had a single MGR). Normally it is suggested to have 3 MONs due to quorum. Do you not have a node you can run a mon on for the few minutes it takes to complete the upgrade?

---- On Wed, 20 May 2020 21:28:19 +0800 Gencer W. Genç <gencer@xxxxxxxxxxxxx> wrote ----

I have 2 mons and 2 mgrs.

  cluster:
    id:     7d308992-8899-11ea-8537-7d489fa7c193
    health: HEALTH_OK

  services:
    mon: 2 daemons, quorum vx-rg23-rk65-u43-130,vx-rg23-rk65-u43-130-1 (age 91s)
    mgr: vx-rg23-rk65-u43-130.arnvag(active, since 28m), standbys: vx-rg23-rk65-u43-130-1.pxmyie
    mds: cephfs:1 {0=cephfs.vx-rg23-rk65-u43-130.kzjznt=up:active} 1 up:standby
    osd: 24 osds: 24 up (since 69m), 24 in (since 3w)

  task status:
    scrub status:
        mds.cephfs.vx-rg23-rk65-u43-130.kzjznt: idle

  data:
    pools:   4 pools, 97 pgs
    objects: 1.38k objects, 4.8 GiB
    usage:   35 GiB used, 87 TiB / 87 TiB avail
    pgs:     97 active+clean

  io:
    client:   5.3 KiB/s wr, 0 op/s rd, 0 op/s wr

  progress:
    Upgrade to docker.io/ceph/ceph:v15.2.2 (33s)
      [=...........................] (remaining: 9m)

Aren't both mons already up? I have no way to add a third mon, by the way.

Thanks,
Gencer.

On 20.05.2020 16:21:03, Ashley Merrick <singapore@xxxxxxxxxxxxxx> wrote:

Yes, I think it's because you're only running two mons, so the script is halting at a check to stop you ending up with just one running (no backup). I had the same issue with a single MGR instance and had to add a second to allow the upgrade to continue. Can you bring up an extra MON?

Thanks

---- On Wed, 20 May 2020 21:18:09 +0800 Gencer W. Genç <gencer@xxxxxxxxxxxxx> wrote ----

Hi Ashley,

I see this:

[INF] Upgrade: Target is docker.io/ceph/ceph:v15.2.2 with id 4569944bbW86c3f9b5286057a558a3f852156079f759c9734e54d4f64092be9fa
[INF] Upgrade: It is NOT safe to stop mon.vx-rg23-rk65-u43-130

Does this mean anything to you? I've also attached the full log. See especially after line #49; I stopped and restarted the upgrade there.

Thanks,
Gencer.
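(The "NOT safe to stop mon" line above is the orchestrator's quorum safety check: with only two monitors, stopping either one drops the cluster below quorum, so the upgrade keeps looping on that check instead of restarting the mon. As a minimal sketch, not something run in the thread, the same check can be asked for by hand; the daemon name below is taken from the log line above:

$ ceph mon ok-to-stop vx-rg23-rk65-u43-130    # reports whether quorum would survive stopping this mon

With two mons this is expected to refuse, which matches Ashley's explanation of the halt.)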
On 20.05.2020 16:13:00, Ashley Merrick <singapore@xxxxxxxxxxxxxx> wrote:

ceph config set mgr mgr/cephadm/log_to_cluster_level debug
ceph -W cephadm --watch-debug

See if anything stands out as an issue with the update; it seems it has completed only the two MGR instances.

If not:

ceph orch upgrade stop
ceph orch upgrade start --ceph-version 15.2.2

and monitor the watch-debug log.

Make sure at the end you run:

ceph config set mgr mgr/cephadm/log_to_cluster_level info

---- On Wed, 20 May 2020 21:07:43 +0800 Gencer W. Genç <gencer@xxxxxxxxxxxxx> wrote ----

Ah yes,

{
    "mon": {
        "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus (stable)": 2
    },
    "mgr": {
        "ceph version 15.2.2 (0c857e985a29d90501a285f242ea9c008df49eb8) octopus (stable)": 2
    },
    "osd": {
        "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus (stable)": 24
    },
    "mds": {
        "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus (stable)": 2
    },
    "overall": {
        "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus (stable)": 28,
        "ceph version 15.2.2 (0c857e985a29d90501a285f242ea9c008df49eb8) octopus (stable)": 2
    }
}

How can I fix this?

Gencer.

On 20.05.2020 16:04:33, Ashley Merrick <singapore@xxxxxxxxxxxxxx> wrote:

Does:

ceph versions

show any services yet running on 15.2.2?

---- On Wed, 20 May 2020 21:01:12 +0800 Gencer W. Genç <gencer@xxxxxxxxxxxxx> wrote ----

Hi Ashley,

$ ceph orch upgrade status
{
    "target_image": "docker.io/ceph/ceph:v15.2.2",
    "in_progress": true,
    "services_complete": [],
    "message": ""
}

Thanks,
Gencer.

On 20.05.2020 15:58:34, Ashley Merrick <singapore@xxxxxxxxxxxxxx> wrote:

What does ceph orch upgrade status show?

---- On Wed, 20 May 2020 20:52:39 +0800 Gencer W. Genç <gencer@xxxxxxxxxxxxx> wrote ----

Hi,

I have 15.2.1 installed on all machines. On the primary machine I executed the upgrade command:

$ ceph orch upgrade start --ceph-version 15.2.2

When I check ceph -s I see this:

  progress:
    Upgrade to docker.io/ceph/ceph:v15.2.2 (30m)
      [=...........................] (remaining: 8h)

It says 8 hours, and it has already been running for 3 hours, but no upgrade has been processed; it is stuck at this point. Is there any way to find out why it is stuck?

Thanks,
Gencer.

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
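(Ashley's suggestion above of bringing up an extra MON just for the duration of the upgrade would, with cephadm, look roughly like the sketch below. This was not done in the thread: it assumes a spare machine is available, hypothetically named "node3" with IP 10.0.0.3, already reachable over SSH with the cluster's cephadm key.

$ ceph orch host add node3 10.0.0.3          # enrol the temporary host into cephadm
$ ceph orch apply mon --unmanaged            # keep cephadm from re-placing mons on its own
$ ceph orch daemon add mon node3:10.0.0.3    # start a third, temporary monitor
$ ceph orch upgrade start --ceph-version 15.2.2
# ...once the upgrade has finished:
$ ceph orch daemon rm mon.node3 --force      # remove the temporary monitor again

With three mons in quorum, the ok-to-stop check passes and the upgrade can restart each mon in turn.)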