> I am not sure if that is the "proper" way to do it - because I am far
> from convinced that there /is/ a good way to harden software against
> memory errors using only software.

You harden your software so that, if the hardware fails, the software can
detect the problem and deal with it accordingly .. it very rarely works out
the other way around. :)

The purpose of duplicating variables is to detect when a bitflip has
occurred due to cosmic radiation or some such highly rare, but still
feasible, event, and then be able to deal with it in software, without
producing some wrong effect.

> "Real" solutions to hardening systems against unexpected errors in
> memory are done in hardware. The most obvious case is to use ECC
> memory.

The safety-critical world has been taught to *never* rely on this.
Bitflips do occur in the field, ECC be damned .. they are rare, but they
do happen. ECC protection ends at the RAM bus .. what happens if a bitflip
occurs in a CPU register? It's rare, but it does occur.

> For more advanced reliability, you use two processor cores in
> lock-step (this is done in some car engine controllers, for example).

Or 2-out-of-3 configurations, and so on ..

> The next step up is to do things in triplicate and use majority voting
> (common on satellites and other space systems).

Common on rail/transportation systems, too - check my sig, this is what
I do .. ;)

> "Hardening" software by hacking the compiler to generate duplicate
> variables sounds like an academic exercise at best.

It could be a very interesting feature if incorporated into gcc mainline
some day, though. I like the idea of having built-in variable duplication
as a compile option, but I have no idea how it would be done in a way that
satisfies GCC as a whole.

-- 
Jay Vaughan,            Thales Austria GmbH
Software Developer      Scheydgasse 41
                        1210 Vienna AUSTRIA