On Fri, Sep 02, 2005 Jon Burgess wrote:

> While reading through the frontend tuning thread code I noticed that the
> algorithm in update_delay() looks wrong. The code doesn't have any
> comments in it to indicate what it is intended to do, but the way I read
> it, the intention is to slow down the scanning if we've recently locked
> and make it faster when we've not locked for a while.
...
> The code does the right thing if the quality level is > 128, by
> returning an increasing delay as the quality increases (i.e. more delay
> when we are locked most of the time).
>
> What I think is wrong is that as the quality goes below 128 the delay
> also starts to increase! This is because q2 goes negative and so q2*q2
> is positive and increases as quality goes to zero. If we never get a
> lock then the delay is just as long as if we have a near perfect lock.
>
> I reckon the behaviour needs to be adjusted as per this patch so that
> the code returns "min_delay" for all values of quality < 128. It should
> help recover more quickly if the signal has been unlocked for a long time.
>
> Does this make things better?
> Or is the large delay when the quality is near zero intentional?

Good question. I guess that if you have a really bad signal you want to
slow down the zig-zag scan.

What this piece of uncommented voodoo code seems to do is to start a
zig-zag scan fast, and if no signal is found, get slower and slower.
Makes sense to me, don't you think so?

	Johannes