Lea Wiemann wrote:
> Jakub Narebski wrote:
> >
> > 1.) Should we put all tests in one file, or should they be split
>
> I'd suggest we leave it in a single file until test execution time
> becomes an issue. Then (when it has become too large) we'll be able to
> figure out good boundaries along which to split the test suite.

I wanted to split the tests not mainly because of performance, but
because it makes them easier to maintain. Though perhaps a single
driver test, do()-ing or require()-ing sub-files, would be enough
(sketch 1 at the end of this mail).

> > 2.) What invariants should we test [...] Checking for example if all items
> > are listed in a 'tree' view, or if all inner links (#link) are
> > valid would be a good start...
>
> Yup; completeness of item lists is especially relevant for paginated
> output. Also check for the presence and validity of links (like
> "parent" links, etc.), and for the presence of certain elements (like
> the file modes in the tree view).

For example, checking whether "next" links (and the like) really lead
to the next page.

> Also, with a $ENV{LONG_GIT_TEST} variable or so, we could automatically
> validate all links for each page we're checking -- it takes a long time,
> but it's still way more efficient than exhaustive spidering of the whole
> site.

Good idea. I would examine how it is done in other tests (sketch 2
below).

> > (by the way, is there some Perl module for RSS, Atom and OPML validation?)
>
> I can't find anything on Google right now,

I usually search CPAN first, not Google...

> but piping them into external
> validators might be just as fine. Also, since those formats are
> generated using print statements (which is really error-prone for XML
> formats), I'd say that a good start would be to check for XML validity.

We can use Test::XML / Test::XML::Valid / Test::XML::Simple to check
that the output is well-formed XML (sketch 3 below). If RSS / Atom /
OPML have good DTDs, XML Schema or RELAX NG schemas, or Schematron
rules, they could be validated against those from Perl.

> > 3.) What invariants you want to test for your caching efforts, e.g.
> > checking if cached output matches non-cached
>
> How about this:
>
> 1. Run the Mechanize tests (and possibly also the existing t9500 tests)
> *without* caching, recording the URL's and contents of all pages the
> test suite accesses.
>
> 2. Get all those URL's again *with* caching (from a cold cache), and
> assert that the output is identical.

How would you ensure a cold cache? (Sketch 4 below shows how I
understand these two steps.)

> 3. Get all those URL's again *with* caching (from a warm cache), and
> assert that the output is identical.

Well, it might be identical, but it also might have a "cached output"
marker somewhere in the output.

> Perhaps also assert that no call
> to the git binary is made (i.e. everything has actually been cached).
> (Of course we might need options for the production site to not cache
> certain things, but let's defer this discussion.)

Or at least (if we don't cache everything, and that could be a good
idea) check whether there are fewer calls to the git binary.

--
Jakub Narebski
Poland
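PS. A few rough sketches of what I have in mind follow. All of them
are untested, and every path, URL and helper that doesn't come from
the discussion above is made up.

Sketch 1: a single driver test do()-ing sub-files. The path is made
up; the error-handling idiom is straight from perldoc -f do:

  #!/usr/bin/perl
  use strict;
  use warnings;

  # Run each test sub-file in turn; the sub-files would contain
  # ordinary Test::More tests sharing a single plan.
  for my $file (glob 't950x/gitweb-*.pl') {
      my $return = do $file;
      warn "couldn't parse $file: $@" if $@;
      warn "couldn't do $file: $!"    unless defined $return;
  }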
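Sketch 2: validating all links on a page under $ENV{LONG_GIT_TEST}.
If I read its documentation correctly, Test::WWW::Mechanize has
page_links_ok(), which follows every link on the current page and
checks the response status:

  use strict;
  use warnings;
  use Test::More 'no_plan';
  use Test::WWW::Mechanize;

  my $mech = Test::WWW::Mechanize->new;
  # made-up URL; the real test would point at the gitweb instance
  # started by the test harness
  $mech->get_ok('http://localhost/gitweb.cgi?p=test.git;a=summary');

  if ($ENV{LONG_GIT_TEST}) {
      # follow every link on the current page and check that none
      # of them returns an error status
      $mech->page_links_ok('all links on the summary page work');
  }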
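Sketch 3: checking that the RSS, Atom and OPML views produce
well-formed XML. Test::XML exports is_well_formed_xml(), if I
remember its interface right (base URL is again made up):

  use strict;
  use warnings;
  use Test::More tests => 3;
  use Test::XML;        # for is_well_formed_xml()
  use WWW::Mechanize;

  my $base = 'http://localhost/gitweb.cgi';   # made-up base URL
  my %feed = (
      rss  => "$base?p=test.git;a=rss",
      atom => "$base?p=test.git;a=atom",
      opml => "$base?a=opml",   # the OPML view takes no project
  );

  my $mech = WWW::Mechanize->new;
  for my $fmt (sort keys %feed) {
      $mech->get($feed{$fmt});
      is_well_formed_xml($mech->content,
                         "$fmt output is well-formed XML");
  }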
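Sketch 4: comparing uncached output against cold-cache output (steps
1 and 2 above). How caching is toggled and how the cache is flushed
is exactly what we would have to settle first; GITWEB_CACHE and
flush_cache() below are stand-ins:

  use strict;
  use warnings;
  use Test::More 'no_plan';
  use WWW::Mechanize;

  sub flush_cache { }   # stand-in: e.g. remove the cache directory

  my @urls = ();        # the URLs recorded by the Mechanize tests
  my $mech = WWW::Mechanize->new;

  # pass 1: record the output of every URL with caching disabled
  $ENV{GITWEB_CACHE} = 0;   # made-up switch
  my %uncached;
  for my $url (@urls) {
      $mech->get($url);
      $uncached{$url} = $mech->content;
  }

  # pass 2: the same URLs with caching enabled, from a cold cache;
  # the output must be byte-for-byte identical
  $ENV{GITWEB_CACHE} = 1;   # made-up switch
  flush_cache();            # this is what guarantees a cold cache
  for my $url (@urls) {
      $mech->get($url);
      is($mech->content, $uncached{$url}, "cold cache matches: $url");
  }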