I have 3 MONs; I don't know why it's showing only one.

root@osswrkprbe001:~# ceph --connect-timeout 60 status
Cluster connection interrupted or timed out

cephadm logs --name mon.osswrkprbe001
--> Is there any way to jump to a specific date? It starts from Oct 4, and I want to check from Oct 16 onward; I suspect that something happened that day. Also, I don't know how to troubleshoot this. I ran the same command (./cephadm logs --name mon.osswrkprbe002) on the second MON, but its logs start from Sep 30. I would need to check Oct 16 there as well.

I would appreciate it if you could help me with the troubleshooting. Thank you.
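For reference, a minimal sketch of what I plan to try, assuming cephadm forwards extra arguments to journalctl after "--" (the fsid and unit names come from the cephadm ls output quoted below):

cephadm logs --name mon.osswrkprbe001 -- --since "2020-10-16" --until "2020-10-18"

# or read the daemon's journal directly via its systemd unit
journalctl -u ceph-56820176-ae5b-4e58-84a2-442b2fc03e6d@mon.osswrkprbe001 --since "2020-10-16" --until "2020-10-18"

Since ceph status hangs, my understanding is that the admin socket inside the mon container should still answer even without quorum:

cephadm enter --name mon.osswrkprbe001
ceph daemon mon.osswrkprbe001 mon_status    # quorum state and known peers

And for the restart you suggested, I assume it's the systemd unit listed by cephadm ls:

systemctl restart ceph-56820176-ae5b-4e58-84a2-442b2fc03e6d@mon.osswrkprbe001.service

Please correct me if any of this is off.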
Regards,

EMANUEL CASTELLI
Information Architect - OSS Department
C: (+549) 116707-4107 | Ext.: 1325 | T-Phone: 7510-1325 | ecastelli@xxxxxxxxxxxxxxxxx
Lavardén 157 1er piso. CABA (C1437FBC)

----- Original Message -----
From: "Eugen Block" <eblock@xxxxxx>
To: "ceph-users" <ceph-users@xxxxxxx>
Sent: Tuesday, October 20, 2020 10:02:35 AM
Subject: Re: Problems with ceph command - Octopus - Ubuntu 16.04

Your mon container seems to be up and running; have you tried restarting it? You have just one mon, is that correct? Do you see anything in the logs?

cephadm logs --name mon.osswrkprbe001

How long do you wait until you hit CTRL-C? There's a connect-timeout option for ceph commands; maybe try a higher timeout:

ceph --connect-timeout 60 status

Is the node hosting the mon showing any issues in dmesg, df -h, syslog, etc.?

Regards,
Eugen

Quoting Emanuel Alejandro Castelli <ecastelli@xxxxxxxxxxxxxxxxx>:

> Hello
>
> I'm facing an issue with Ceph. I cannot run any ceph command; it
> literally hangs. I need to hit CTRL-C to get this:
>
> ^CCluster connection interrupted or timed out
>
> This is on Ubuntu 16.04. Also, I use Grafana with Prometheus to get
> information from the cluster, but now there is no data to graph. Any
> clue?
>
> cephadm version
> INFO:cephadm:Using recent ceph image ceph/ceph:v15
> ceph version 15.2.4 (7447c15c6ff58d7fce91843b705a268a1917325c) octopus (stable)
>
> cephadm ls
> [
>     {
>         "style": "cephadm:v1",
>         "name": "mon.osswrkprbe001",
>         "fsid": "56820176-ae5b-4e58-84a2-442b2fc03e6d",
>         "systemd_unit": "ceph-56820176-ae5b-4e58-84a2-442b2fc03e6d@mon.osswrkprbe001",
>         "enabled": true,
>         "state": "running",
>         "container_id": "afbe6ef76198bf05ec972e832077849d4a4438bd56f2e177aeb9b11146577baf",
>         "container_image_name": "docker.io/ceph/ceph:v15.2.1",
>         "container_image_id": "bc83a388465f0568dab4501fb7684398dca8b50ca12a342a57f21815721723c2",
>         "version": "15.2.1",
>         "started": "2020-10-19T19:03:16.759730",
>         "created": "2020-09-04T23:30:30.250336",
>         "deployed": "2020-09-04T23:48:20.956277",
>         "configured": "2020-09-04T23:48:22.100283"
>     },
>     {
>         "style": "cephadm:v1",
>         "name": "mgr.osswrkprbe001",
>         "fsid": "56820176-ae5b-4e58-84a2-442b2fc03e6d",
>         "systemd_unit": "ceph-56820176-ae5b-4e58-84a2-442b2fc03e6d@mgr.osswrkprbe001",
>         "enabled": true,
>         "state": "running",
>         "container_id": "1737b2cf46310025c0ae853c3b48400320fb35b0443f6ab3ef3d6cbb10f460d8",
>         "container_image_name": "docker.io/ceph/ceph:v15.2.1",
>         "container_image_id": "bc83a388465f0568dab4501fb7684398dca8b50ca12a342a57f21815721723c2",
>         "version": "15.2.1",
>         "started": "2020-10-19T20:43:38.329529",
>         "created": "2020-09-04T23:30:31.110341",
>         "deployed": "2020-09-04T23:47:41.604057",
>         "configured": "2020-09-05T00:00:21.064246"
>     }
> ]
>
> Thank you in advance.
>
> Regards,
>
> EMANUEL CASTELLI
> Information Architect - OSS Department
> C: (+549) 116707-4107 | Ext.: 1325 | T-Phone: 7510-1325 | ecastelli@xxxxxxxxxxxxxxxxx
> Lavardén 157 1er piso. CABA (C1437FBC)
>
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx