Hi,
This still seems really non-intuitive to me. As far as I can tell this comes down to the special "0 means uninitialized" behaviour, which you don't want here, because your values actually fluctuate between 0 and 100, and you can legitimately reach 0 with a lot of successes. But to me it's really non-intuitive, if not counter-intuitive, to say "my values are 0 to 100 inclusive, but I can never deal with reaching 0, because then I jump to 100 immediately". That doesn't make much sense to me.

I can see how this patch makes some sense from a code point of view: it's more or less the minimal code change that makes the existing code work. But I'd argue you're optimising for the wrong metric here; "minimal code change to fix the bug" normally shouldn't be the goal, it should be "code changes that make this clearer and avoid the problem", or something like that.

Anyway, I guess that's all a long-winded way of saying that I don't really agree with this change. Basically, I see two ways to solve this:

1) we have DECLARE_EWMA_ZERO_VALID() or something like that which *doesn't* treat 0 as an uninitialized value, and either has a separate "not initialized yet" bit (though that's iffy storage-wise), or simply takes another argument to _init() for the initial value or so.

2) you just don't use 0 and 100 but say 1 and 100, which results in basically the same behaviour but avoids the special 0.

johannes
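For reference, the behaviour being discussed comes from the ewma_*_add() helper that DECLARE_EWMA() generates. A simplified paraphrase of include/linux/average.h (the build-time sanity checks and the READ_ONCE()/WRITE_ONCE() annotations are left out; as in the original, _weight_rcp must be a power of two and ilog2() comes from <linux/log2.h>) looks roughly like this:

#define DECLARE_EWMA(name, _precision, _weight_rcp)			\
	struct ewma_##name {						\
		unsigned long internal;					\
	};								\
	static inline void ewma_##name##_init(struct ewma_##name *e)	\
	{								\
		e->internal = 0;					\
	}								\
	static inline unsigned long					\
	ewma_##name##_read(struct ewma_##name *e)			\
	{								\
		return e->internal >> (_precision);			\
	}								\
	static inline void ewma_##name##_add(struct ewma_##name *e,	\
					     unsigned long val)		\
	{								\
		unsigned long internal = e->internal;			\
		unsigned long weight_rcp = ilog2(_weight_rcp);		\
									\
		/* internal == 0 doubles as "not initialized yet":	\
		 * any sample that finds 0 here is adopted verbatim	\
		 * instead of being averaged in.			\
		 */							\
		e->internal = internal ?				\
			(((internal << weight_rcp) - internal) +	\
			 (val << (_precision))) >> weight_rcp :		\
			(val << (_precision));				\
	}

So once the stored average has decayed all the way to 0, the next non-zero sample isn't weighted at all, it simply becomes the new average, which is the jump from 0 straight to 100 described above. That also seems to be why option 2) works: with every sample >= 1, the weighted sum in _add() can never round back down to 0 after the first call.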
I also ran into this problem in the past, and after reviewing it again with a colleague I think this is a real bug in the EWMA implementation. I'll try to provide a proper patch in the next few days. The EWMA code always handles an internal value of zero as if it were uninitialized, which is wrong, e.g., for averaged values that can be positive or negative.
A quick look shows that this bug has been there since the first implementation of the EWMA code ...
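For concreteness, Johannes' option 1) could look something like the sketch below. This is only an illustration under the assumptions from the previous mail, not an existing kernel macro and not the patch referred to here: DECLARE_EWMA_ZERO_VALID is just the suggested name, and _init() simply takes the starting value explicitly so that 0 never has to double as "uninitialized".

#define DECLARE_EWMA_ZERO_VALID(name, _precision, _weight_rcp)		\
	struct ewma_##name {						\
		unsigned long internal;					\
	};								\
	/* the caller provides the starting value explicitly */	\
	static inline void ewma_##name##_init(struct ewma_##name *e,	\
					      unsigned long init_val)	\
	{								\
		e->internal = init_val << (_precision);			\
	}								\
	static inline unsigned long					\
	ewma_##name##_read(struct ewma_##name *e)			\
	{								\
		return e->internal >> (_precision);			\
	}								\
	static inline void ewma_##name##_add(struct ewma_##name *e,	\
					     unsigned long val)		\
	{								\
		unsigned long internal = e->internal;			\
		unsigned long weight_rcp = ilog2(_weight_rcp);		\
									\
		/* always average; 0 is a perfectly valid value here */\
		e->internal = (((internal << weight_rcp) - internal) +	\
			       (val << (_precision))) >> weight_rcp;	\
	}

A user averaging a 0..100 percentage would then, for example, call the generated _init() with 100 (or whatever starting point makes sense) and could reach and leave 0 like any other value.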
Kind regards,
Benjamin