On Fri, May 6, 2022 at 5:58 AM Harry G. Coin <hgcoin@xxxxxxxxx> wrote:
>
> I tried searching for the meaning of a ceph Quincy all-caps WARNING
> message, and failed. So I need help. Ceph tells me my cluster is
> 'healthy', yet emits a bunch of '[progress WARNING root] complete: ev'
> messages, which I score right up there with the helpful dmesg "Yama:
> becoming mindful".
>
> Should I care, and if I should, what is to be done? Here's the log snip:

Well, I've never seen it before (and I don't work on that code), but this
error came from the progress module, which is the thing that gives you
pretty little charts about how long until stuff like rebalancing finishes.
It seems to happen when the module gets a report that an event it doesn't
remember has been completed. So you should probably grab (or generate and
grab) some mgr logs and create a ticket for it so the team can prevent
that from happening, but I also don't think it's something to worry about.
-Greg

> May 6 07:48:51 noc3 bash[3206]: cluster 2022-05-06T12:48:49.294641+0000
> mgr.noc3.sybsfb (mgr.14574839) 20656 : cluster [DBG] pgmap v19338: 1809
> pgs: 2 active+clean+scrubbing+deep, 1807 active+clean; 16 TiB data, 41
> TiB used, 29 TiB / 70 TiB avail; 469 KiB/s rd, 4.7 KiB/s wr, 2 op/s
> May 6 07:48:51 noc3 bash[3206]: audit 2022-05-06T12:48:49.313491+0000
> mon.noc1 (mon.3) 336 : audit [DBG] from='mgr.14574839
> [fc00:1002:c7::43]:0/501702592' entity='mgr.noc3.sybsfb' cmd=[{"prefix":
> "config dump", "format": "json"}]: dispatch
> May 6 07:48:52 noc3 bash[3203]: debug 2022-05-06T12:48:52.224+0000
> 7f2e20629700 0 [progress WARNING root] complete: ev
> dc5810d7-7a30-4c8f-bafa-3158423c49f3 does not exist
> May 6 07:48:52 noc3 bash[3203]: debug 2022-05-06T12:48:52.224+0000
> 7f2e20629700 0 [progress WARNING root] complete: ev
> c81b591e-6498-41bd-98bb-edbf80c690f8 does not exist
> May 6 07:48:52 noc3 bash[3203]: debug 2022-05-06T12:48:52.224+0000
> 7f2e20629700 0 [progress WARNING root] complete: ev
> a9632817-10e7-4a60-ae5c-a4220d7ca00b does not exist
> May 6 07:48:52 noc3 bash[3203]: debug 2022-05-06T12:48:52.224+0000
> 7f2e20629700 0 [progress WARNING root] complete: ev
> 29a7ca4d-6e2a-423a-9530-3f61c0dcdbfe does not exist
> May 6 07:48:52 noc3 bash[3203]: debug 2022-05-06T12:48:52.228+0000
> 7f2e20629700 0 [progress WARNING root] complete: ev
> 68de11a0-92a4-48b6-8420-752bcdd79182 does not exist
> May 6 07:48:52 noc3 bash[3203]: debug 2022-05-06T12:48:52.228+0000
> 7f2e20629700 0 [progress WARNING root] complete: ev
> a9437122-8ff8-4de9-a048-8a3c0262b02c does not exist
> May 6 07:48:52 noc3 bash[3203]: debug 2022-05-06T12:48:52.228+0000
> 7f2e20629700 0 [progress WARNING root] complete: ev
> f15c0540-9089-4a96-884e-d75668f84796 does not exist
> May 6 07:48:52 noc3 bash[3203]: debug 2022-05-06T12:48:52.228+0000
> 7f2e20629700 0 [progress WARNING root] complete: ev
> eeaf605a-9c55-44c9-9c69-8c7c35ca7591 does not exist
> May 6 07:48:52 noc3 bash[3203]: debug 2022-05-06T12:48:52.228+0000
> 7f2e20629700 0 [progress WARNING root] complete: ev
> ba0ff860-4fc5-4c84-b337-1c8c616b5fbd does not exist
> May 6 07:48:52 noc3 bash[3203]: debug 2022-05-06T12:48:52.228+0000
> 7f2e20629700 0 [progress WARNING root] complete: ev
> 656fcf28-3ce1-4d6d-8ec2-eac5b6f0a233 does not exist
> May 6 07:48:52 noc3 bash[3203]: ::ffff:10.12.112.66 - -
> [06/May/2022:12:48:52] "GET /metrics HTTP/1.1" 200 421310 ""
> "Prometheus/2.33.4"
> May 6 07:48:53 noc3 bash[3206]: audit 2022-05-06T12:48:51.273954+0000
> mon.noc1 (mon.3) 337 : audit [INF] from='mgr.14574839
> [fc00:1002:c7::43]:0/501702592' entity='mgr.noc3.sybsfb' cmd=[{"prefix":
> "config rm", "format": "json", "who": "client", "name":
> "mon_cluster_log_file_level"}]: dispatch
>
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
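For what it's worth, the failure mode Greg describes — the module receiving a completion report for an event id it no longer tracks (e.g. one created before a mgr restart or failover) — can be sketched in a few lines of Python. This is purely illustrative, not the actual ceph-mgr progress module code; the class and method names here are made up:

```python
import logging

log = logging.getLogger("progress")

class ProgressTracker:
    """Hypothetical sketch of progress-event bookkeeping (not real
    ceph-mgr code): events are registered by id and later completed."""

    def __init__(self):
        self._events = {}  # event id -> (description, fraction done)

    def update(self, ev_id, desc, progress):
        # Register or refresh an in-flight event.
        self._events[ev_id] = (desc, progress)

    def complete(self, ev_id):
        if ev_id not in self._events:
            # The situation behind the log message: a completion
            # arrives for an id this tracker never recorded (or has
            # already forgotten). Warn and carry on -- nothing is
            # actually broken, the report just can't be matched up.
            log.warning("complete: ev %s does not exist", ev_id)
            return False
        del self._events[ev_id]
        return True
```

Under this reading, the warning is harmless noise: the tracker can't correlate the report, logs it, and continues, which matches the cluster still reporting healthy.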