Hi Matthew,

Some of the logging was intentionally removed because it used to clutter up the logs. However, we are bringing some of the useful stuff back and have a tracker ticket, https://tracker.ceph.com/issues/37886, open for it.

Thanks,
Neha

On Thu, Jan 24, 2019 at 12:13 PM Stefan Kooman <stefan@xxxxxx> wrote:
>
> Quoting Matthew Vernon (mv3@xxxxxxxxxxxx):
> > Hi,
> >
> > On our Jewel clusters, the mons keep a log of the cluster status, e.g.
> >
> > 2019-01-24 14:00:00.028457 7f7a17bef700 0 log_channel(cluster) log [INF] :
> > HEALTH_OK
> > 2019-01-24 14:00:00.646719 7f7a46423700 0 log_channel(cluster) log [INF] :
> > pgmap v66631404: 173696 pgs: 10 active+clean+scrubbing+deep, 173686
> > active+clean; 2271 TB data, 6819 TB used, 9875 TB / 16695 TB avail; 1313
> > MB/s rd, 236 MB/s wr, 12921 op/s
> >
> > This is sometimes useful after a problem, to see when things started going
> > wrong (which can be helpful for incident response and analysis) and so on.
> > There doesn't appear to be any such logging in Luminous, either by mons or
> > mgrs. What am I missing?
>
> Our mons keep a log in /var/log/ceph/ceph.log (running luminous 12.2.8).
> Is that log present on your systems?
>
> Gr. Stefan
>
> --
> | BIT BV http://www.bit.nl/ Kamer van Koophandel 09090351
> | GPG: 0xD14839C6 +31 318 648 688 / info@xxxxxx
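
As a minimal sketch for checking where the mon cluster log ends up on a Luminous node (the command and option names below are from memory rather than from this thread, so please verify them against your version's documentation):

    # Query a running mon over its admin socket; replace $(hostname -s)
    # with your mon's actual ID if it differs.
    ceph daemon mon.$(hostname -s) config get mon_cluster_log_file
    ceph daemon mon.$(hostname -s) config get mon_cluster_log_file_level

    # The mons also keep recent cluster log entries that can be dumped directly:
    ceph log last 50

If mon_cluster_log_file points at /var/log/ceph/ceph.log for the cluster channel, that file is where cluster-channel messages such as HEALTH_OK should end up.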