Hi Greg,
I am using 0.86 and referring to the OSD logs to check scrub behaviour. Please have a look at this snippet from the osd.10 log:
## Triggered scrub on osd.10 --->
2014-11-12 16:24:21.393135 7f5026f31700 0 log_channel(default) log [INF] : 0.4 scrub ok
2014-11-12 16:24:24.393586 7f5026f31700 0 log_channel(default) log [INF] : 0.20 scrub ok
2014-11-12 16:24:30.393989 7f5026f31700 0 log_channel(default) log [INF] : 0.21 scrub ok
2014-11-12 16:24:33.394764 7f5026f31700 0 log_channel(default) log [INF] : 0.23 scrub ok
2014-11-12 16:24:34.395293 7f5026f31700 0 log_channel(default) log [INF] : 0.36 scrub ok
2014-11-12 16:24:35.941704 7f5026f31700 0 log_channel(default) log [INF] : 1.1 scrub ok
2014-11-12 16:24:39.533780 7f5026f31700 0 log_channel(default) log [INF] : 1.d scrub ok
2014-11-12 16:24:41.811185 7f5026f31700 0 log_channel(default) log [INF] : 1.44 scrub ok
2014-11-12 16:24:54.257384 7f5026f31700 0 log_channel(default) log [INF] : 1.5b scrub ok
2014-11-12 16:25:02.973101 7f5026f31700 0 log_channel(default) log [INF] : 1.67 scrub ok
2014-11-12 16:25:17.597546 7f5026f31700 0 log_channel(default) log [INF] : 1.6b scrub ok
## Previous scrub still in progress; triggered scrub on osd.10 again ---> Ceph restarted the scrub from the beginning
2014-11-12 16:25:19.394029 7f5026f31700 0 log_channel(default) log [INF] : 0.4 scrub ok
2014-11-12 16:25:22.402630 7f5026f31700 0 log_channel(default) log [INF] : 0.20 scrub ok
2014-11-12 16:25:24.695565 7f5026f31700 0 log_channel(default) log [INF] : 0.21 scrub ok
2014-11-12 16:25:25.408821 7f5026f31700 0 log_channel(default) log [INF] : 0.23 scrub ok
2014-11-12 16:25:29.467527 7f5026f31700 0 log_channel(default) log [INF] : 0.36 scrub ok
2014-11-12 16:25:32.558838 7f5026f31700 0 log_channel(default) log [INF] : 1.1 scrub ok
2014-11-12 16:25:35.763056 7f5026f31700 0 log_channel(default) log [INF] : 1.d scrub ok
2014-11-12 16:25:38.166853 7f5026f31700 0 log_channel(default) log [INF] : 1.44 scrub ok
2014-11-12 16:25:40.602758 7f5026f31700 0 log_channel(default) log [INF] : 1.5b scrub ok
2014-11-12 16:25:42.169788 7f5026f31700 0 log_channel(default) log [INF] : 1.67 scrub ok
2014-11-12 16:25:45.851419 7f5026f31700 0 log_channel(default) log [INF] : 1.6b scrub ok
2014-11-12 16:25:51.259453 7f5026f31700 0 log_channel(default) log [INF] : 1.a8 scrub ok
2014-11-12 16:25:53.012220 7f5026f31700 0 log_channel(default) log [INF] : 1.a9 scrub ok
2014-11-12 16:25:54.009265 7f5026f31700 0 log_channel(default) log [INF] : 1.cb scrub ok
2014-11-12 16:25:56.516569 7f5026f31700 0 log_channel(default) log [INF] : 1.e2 scrub ok
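The restart is visible in the log because the same PGs (0.4, 0.20, 0.21, ...) report "scrub ok" a second time a minute later. As an illustration (not part of the original mail), a small sketch that counts "scrub ok" lines per PG; any PG listed more than once in the window was scrubbed again:

```python
import re
from collections import Counter

# Matches the "scrub ok" lines from the OSD log, capturing the PG id
# (e.g. "0.4" or "1.6b").
SCRUB_OK = re.compile(r"log \[INF\] : (\S+) scrub ok")

def pgs_scrubbed_more_than_once(log_lines):
    """Return the PG ids that report 'scrub ok' multiple times.

    Seeing the same PG complete a scrub twice in a short window is the
    signal that the scrub run restarted from the beginning.
    """
    counts = Counter()
    for line in log_lines:
        m = SCRUB_OK.search(line)
        if m:
            counts[m.group(1)] += 1
    return sorted(pg for pg, n in counts.items() if n > 1)
```

Running this over the snippet above would list 0.4 through 1.6b, i.e. every PG that completed before the second scrub was triggered.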
-Thanks & regards,
Mallikarjun Biradar
On Tue, Nov 11, 2014 at 12:18 PM, Gregory Farnum <greg@xxxxxxxxxxx> wrote:
What version of Ceph are you seeing this on? How are you identifying
that scrub is restarting from the beginning? It sounds sort of
familiar to me, but I thought this was fixed so it was a no-op if you
issue another scrub. (That's not authoritative though; I might just be
missing a reason we want to restart it.)
-Greg

On Sun, Nov 9, 2014 at 9:29 PM, Mallikarjun Biradar
<mallikarjuna.biradar@xxxxxxxxx> wrote:
> Hi all,
>
> Triggering a shallow scrub on an OSD where a scrub is already in progress
> restarts the scrub from the beginning on that OSD.
>
> Steps:
> Triggered a shallow scrub on an OSD (cluster is running heavy IO).
> While the scrub was in progress, triggered a shallow scrub again on that OSD.
>
> Observed behaviour: the scrub restarted from the beginning on that OSD.
>
> Please let me know whether this is expected behaviour?
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com