On Sat, Sep 29, 2007, Manu Abraham wrote:
> ...

Instead of losing myself in the details of your questions, some background info:

> 1) LNB drift

- LNBs have a constant error plus a temperature drift (e.g. +/-1MHz error,
  +/-3MHz drift for a temperature range of -40 ... +60 °C -- cheap no-name
  equipment is usually worse)

- the demod can only compensate for a limited frequency offset (e.g. both
  stv0299 and stb0899 QPSK are specced to +/-5% of fM_CLK for "Carrier loop
  capture range", where fM_CLK is typically 88MHz for stv0299 and 108MHz for
  stb0899, thus 4.4MHz vs. 5.4MHz; but note that these are best-case figures
  which only hold with good CNR)

- if the LNB error + drift is higher than what the demod can capture, then
  tuning fails

- _only when initial tuning fails_ does the sw zig-zag kick in; it attempts
  to tune in increasing steps around the nominal frequency until the demod
  either locks on the signal, or the scanned frequency range covers the
  complete channel bandwidth (we want to avoid locking on the neighbour
  channel; note that adjacent channels on satellite use different
  polarization, so we can't lock there unless we really stepped way too
  far). See the first sketch at the end of this mail for the stepping
  pattern.

- the parameters for sw zig-zag are provided by the demod driver in
  struct dvb_frontend_tune_settings:

      int min_delay_ms; // when to assume tuning failed -> do next step
      int step_size;    // size of zig-zag step
      int max_drift;    // when to stop zig-zagging

  a demod driver can disable sw zig-zag by setting step_size and max_drift
  to zero (which is what DVB-T and DVB-C drivers do)

- sw zig-zag is by no means stv0299 specific and is used by (almost?) all
  DVB-S demod drivers

The bottom line is that:

1. zig-zag doesn't slow down tuning, because it only ever kicks in when the
   initial tuning attempt failed (however, it is possible that a driver sets
   min_delay_ms too small; then zig-zag kicks in too soon and ruins your day)

2. zig-zag tries harder to tune, and makes users happy, even if tuning might
   take some time; without zig-zag, all you can do is tell your user "sorry,
   no signal found"

3. once zig-zag has succeeded, the offset (drift compensation) is stored and
   reused at the next channel switch -- thus not every tuning is slowed down
   even if there is a large offset

4. zig-zag could also be implemented in user space, but IMHO it's better the
   way it is now -- since some hw doesn't need sw zig-zag, and the ones that
   need it need different parameters

IIRC Andrew de Quincey spent significant time optimizing the zig-zag code
and the parameters for various frontends.

> 2) Inversion AUTO

In the old days there were literally two wires carrying the I and Q signals
from the baseband processor to the QPSK modulator, and it probably was a
common mistake that someone messed up the wiring at the broadcaster.
Nowadays the equipment is integrated and the inversion setting is just a
check box in the control software. Still, broadcasters seem to set it at
random.

As Felix explained, at the receiver side you don't know whether the spectrum
is inverted. If the demod firmware doesn't handle it, you have to try both
inversion settings in sw. And as with zig-zag, you could do it in userspace,
but IMHO it's better to let the core handle it.

Apps which want to optimize tuning performance could use FE_GET_FRONTEND
after successful tuning to get the real inversion setting, and use that
instead of AUTO (second sketch below).
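Back to zig-zag for a moment: to make the stepping pattern concrete, here is
a minimal standalone sketch. It is not the kernel's actual implementation
(see dvb_frontend.c for that); try_tune() and all the numbers are
hypothetical stand-ins, and the real code also waits min_delay_ms between
steps, which the sketch omits.

#include <stdio.h>
#include <stdlib.h>

/* hypothetical stand-in for a real tuning attempt: pretend the
 * transponder is actually 2.7 MHz above the nominal frequency and
 * the demod captures +/-0.5 MHz of residual offset */
static int try_tune(int freq_khz, int nominal_khz)
{
    return abs(freq_khz - (nominal_khz + 2700)) <= 500;
}

int main(void)
{
    int nominal = 12551500;  /* kHz, hypothetical transponder */
    int step_size = 500;     /* kHz, from dvb_frontend_tune_settings */
    int max_drift = 5000;    /* kHz, ditto */
    int offset = 0, sign = 1;

    for (;;) {
        printf("trying %d kHz (offset %+d)\n", nominal + offset, offset);
        if (try_tune(nominal + offset, nominal)) {
            printf("locked; store offset %+d kHz for the next zap\n",
                   offset);
            return 0;
        }
        /* next step in the sequence 0, +s, -s, +2s, -2s, ... */
        if (sign > 0)
            offset = -offset + step_size;
        else
            offset = -offset;
        sign = -sign;
        if (abs(offset) > max_drift) {
            printf("giving up: no lock within +/-%d kHz\n", max_drift);
            return 1;
        }
    }
}

Note how max_drift bounds the search, so we never wander off onto the
neighbour transponder, as described above.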
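And to illustrate the FE_GET_FRONTEND suggestion, a minimal userspace sketch
(device path is illustrative, error handling trimmed):

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/dvb/frontend.h>

int main(void)
{
    struct dvb_frontend_parameters p;
    int fd = open("/dev/dvb/adapter0/frontend0", O_RDONLY);

    if (fd < 0 || ioctl(fd, FE_GET_FRONTEND, &p) < 0) {
        perror("FE_GET_FRONTEND");
        return 1;
    }
    /* an app could cache p.inversion and pass it instead of
     * INVERSION_AUTO on the next FE_SET_FRONTEND for this transponder */
    printf("inversion: %s\n",
           p.inversion == INVERSION_ON  ? "ON" :
           p.inversion == INVERSION_OFF ? "OFF" : "AUTO");
    return 0;
}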
The reason why scan/szap don't do this (i.e. cache the real inversion in the
channel list) is that a) not all drivers get it right, and b) for DVB-C and
DVB-S the inversion setting can differ from region to region -- we don't
want channel lists which only work with some cards in some regions.

HTH,
Johannes