That seems to be scrubbing pretty often. Can you attach a config diff from osd.4 (ceph daemon osd.4 config diff)? -Sam

On Tue, Mar 29, 2016 at 9:30 AM, German Anders <ganders@xxxxxxxxxxxx> wrote:
> Hi All,
>
> Maybe a simple question: I've set up a new cluster with the Infernalis
> release. There's no I/O going on at the cluster level, yet I'm receiving a
> lot of these messages:
>
> 2016-03-29 12:22:07.462818 mon.0 [INF] pgmap v158062: 8192 pgs: 8192
> active+clean; 20617 MB data, 46164 MB used, 52484 GB / 52529 GB avail
> 2016-03-29 12:22:08.176684 osd.13 [INF] 0.d38 scrub starts
> 2016-03-29 12:22:08.179841 osd.13 [INF] 0.d38 scrub ok
> 2016-03-29 12:21:59.526355 osd.9 [INF] 0.8a6 scrub starts
> 2016-03-29 12:21:59.529582 osd.9 [INF] 0.8a6 scrub ok
> 2016-03-29 12:22:03.004107 osd.4 [INF] 0.38b scrub starts
> 2016-03-29 12:22:03.007220 osd.4 [INF] 0.38b scrub ok
> 2016-03-29 12:22:03.617706 osd.21 [INF] 0.525 scrub starts
> 2016-03-29 12:22:03.621073 osd.21 [INF] 0.525 scrub ok
> 2016-03-29 12:22:06.527264 osd.9 [INF] 0.8a6 scrub starts
> 2016-03-29 12:22:06.529150 osd.9 [INF] 0.8a6 scrub ok
> 2016-03-29 12:22:07.005628 osd.4 [INF] 0.38b scrub starts
> 2016-03-29 12:22:07.009776 osd.4 [INF] 0.38b scrub ok
> 2016-03-29 12:22:07.618191 osd.21 [INF] 0.525 scrub starts
> 2016-03-29 12:22:07.621363 osd.21 [INF] 0.525 scrub ok
>
> I mean, all the time. AFAIK this is because the scrub operation is like
> an fsck at the object level, so this makes me think that it's not a normal
> situation. Is there any command I can run in order to check this?
>
> # ceph --cluster cephIB health detail
> HEALTH_OK
>
> Thanks in advance,
>
> Best,
>
> German
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
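For reference, the scrub frequency implied by the quoted log can be checked with a short script. This is a minimal sketch (not part of the original thread) that parses the "scrub starts" lines quoted above and reports how long each PG waited before being scrubbed again; the same PGs re-scrubbing within a few seconds is far below the usual scrub interval (by default on the order of a day), which is why a config diff from the OSD is a reasonable next step:

```python
from collections import defaultdict
from datetime import datetime

# "scrub starts" lines taken verbatim from the log quoted above.
log = """\
2016-03-29 12:21:59.526355 osd.9 [INF] 0.8a6 scrub starts
2016-03-29 12:22:03.004107 osd.4 [INF] 0.38b scrub starts
2016-03-29 12:22:03.617706 osd.21 [INF] 0.525 scrub starts
2016-03-29 12:22:06.527264 osd.9 [INF] 0.8a6 scrub starts
2016-03-29 12:22:07.005628 osd.4 [INF] 0.38b scrub starts
2016-03-29 12:22:07.618191 osd.21 [INF] 0.525 scrub starts
"""

# Collect scrub-start timestamps per PG.
starts = defaultdict(list)
for line in log.splitlines():
    if "scrub starts" not in line:
        continue
    parts = line.split()
    ts = datetime.strptime(parts[0] + " " + parts[1], "%Y-%m-%d %H:%M:%S.%f")
    pg = parts[4]  # e.g. "0.8a6"
    starts[pg].append(ts)

# Seconds between consecutive scrub starts of the same PG.
intervals = {}
for pg, times in sorted(starts.items()):
    gaps = [round((b - a).total_seconds(), 3) for a, b in zip(times, times[1:])]
    intervals[pg] = gaps
    print(pg, "scrubbed again after", gaps, "seconds")
```

Each PG here is re-scrubbed after only 4-7 seconds, which supports the observation that this cluster is scrubbing abnormally often.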