CxxUnit is your friend. I have found that writing test cases _before_ fixing code speeds up development cycles and prevents backward steps:

1. Write a test case and prove that it fails.
2. Fix the code and prove that the test case passes.
3. On the next pass, run every test case ever written and confirm that you haven't broken anything else.

Before release, all tests must pass. If a test fails, you must determine whether the test is broken or the code is broken, and fix whichever it is.

Over time, even doing this on an ad hoc basis, you end up with a fairly complete automated test suite that "proves" all of the areas that have ever been problems. It still takes time to run these tests, but four hours of the machine running all-out covers far more test scenarios than I could ever run manually. Automating the test setup, execution, and teardown means that I can guarantee the exact scenario I want to test - there is no guesswork and, in the long run, no falling into patterns.

I have an app that handles roughly 700 million dollars of business per year and has had zero support personnel for two years. Because its automated regression tests are built on an open source framework written in the product's own programming language, the department responsible for the product can thoroughly re-test any outside impact even though they have no programmers on staff familiar with that environment or language. Tying the testing library into the product has also let me build intelligent diagnostics, so the product can explain to the user what is wrong, who to contact, and/or what to do to fix the problem themselves.

CxxUnit is your friend.
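To make the cycle above concrete, here is a minimal sketch in plain C++. I'm using a hand-rolled check() helper rather than quoting CxxUnit's own macros (so I don't misstate its API), and rtrim() is a made-up function standing in for whatever you are fixing; the red-then-green shape is the same in any xUnit framework.

    #include <cstdio>
    #include <string>

    static int failures = 0;

    // Report one test outcome; any xUnit assert macro plays this role.
    static void check(bool ok, const std::string& name) {
        std::printf("%s: %s\n", ok ? "PASS" : "FAIL", name.c_str());
        if (!ok) ++failures;
    }

    // Hypothetical function under repair: strip trailing blanks.
    static std::string rtrim(const std::string& s) {
        std::string::size_type end = s.find_last_not_of(' ');
        return (end == std::string::npos) ? "" : s.substr(0, end + 1);
    }

    int main() {
        // Step 1: these tests were written first, from the bug report,
        // and failed against the old rtrim(). Step 2: with the fix in
        // place they pass, and they stay in the suite forever as
        // regression guards.
        check(rtrim("abc  ") == "abc", "rtrim strips trailing blanks");
        check(rtrim("   ") == "",      "rtrim handles all-blank input");
        return failures == 0 ? 0 : 1;
    }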
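The "run everything, unattended" pass and the automated setup/execution/teardown can be sketched the same way. This is a toy runner standing in for what the framework gives you; the fixture data and test names are invented for illustration. The point is that setUp() guarantees every test starts from the same known scenario, and a nonzero exit code enforces the all-tests-must-pass rule before release.

    #include <cstdio>
    #include <vector>

    static int failures = 0;

    // One regression test: a name plus a function to run.
    struct Test {
        const char* name;
        bool (*body)();
    };

    // Fixture state shared by the tests below. setUp()/tearDown()
    // guarantee a fresh, known scenario for every test - no guesswork,
    // no state leaking from one test into the next.
    static std::vector<int> orders;
    static void setUp()    { orders.clear(); orders.push_back(100); orders.push_back(250); }
    static void tearDown() { orders.clear(); }

    // Hypothetical regression tests accumulated over time.
    static bool testOrderCount() { return orders.size() == 2; }
    static bool testFirstOrder() { return orders[0] == 100; }

    int main() {
        const Test suite[] = { {"order count", testOrderCount},
                               {"first order", testFirstOrder} };
        // The "next pass": run every test ever written, unattended.
        for (const Test& t : suite) {
            setUp();
            bool ok = t.body();
            tearDown();
            std::printf("%s: %s\n", ok ? "PASS" : "FAIL", t.name);
            if (!ok) ++failures;
        }
        // Before release, all tests must pass: a nonzero exit code
        // lets a build script block the release automatically.
        return failures == 0 ? 0 : 1;
    }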
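Finally, the intelligent-diagnostics idea: the same kind of check the test suite runs can be shipped inside the product, except that instead of just failing it tells the user what is wrong, who to contact, and what to try. Everything here is hypothetical - the config path, the contact line, and the Diagnostic struct are placeholders, not anything from CxxUnit itself.

    #include <cstdio>
    #include <fstream>
    #include <string>

    // A diagnostic result aimed at the end user, not the programmer.
    struct Diagnostic {
        bool        ok;
        std::string problem;
        std::string contact;
        std::string selfHelp;
    };

    // Example probe: can we read the (made-up) configuration file?
    static Diagnostic checkConfig(const std::string& path) {
        std::ifstream f(path.c_str());
        if (f) return {true, "", "", ""};
        return {false,
                "Cannot open configuration file " + path,
                "Call the help desk",                        // placeholder contact
                "Check that the file exists and is readable"};
    }

    int main() {
        Diagnostic d = checkConfig("app.conf");  // hypothetical path
        if (!d.ok) {
            std::printf("PROBLEM: %s\n", d.problem.c_str());
            std::printf("CONTACT: %s\n", d.contact.c_str());
            std::printf("TRY:     %s\n", d.selfHelp.c_str());
            return 1;
        }
        std::printf("All diagnostics passed.\n");
        return 0;
    }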