On Wed, Jan 14, 2015 at 09:28:56PM +0100, Johannes Sixt wrote:

> For some unknown reason, the dd on my Windows box segfaults randomly,
> but since recently, it does so much more often than it used to, which
> makes running the test suite burdensome.
>
> Use printf to write large files instead of dd. To emphasize that three
> of the large blobs are exact copies, use cp to allocate them.
>
> The new code makes the files a bit smaller, and they are not sparse
> anymore, but the tests do not depend on these properties. We do not want
> to use test-genrandom here (which is used to generate large files
> elsewhere in t1050), so that the files can be compressed well (which
> keeps the run-time short).

Thanks, this version looks good to me.

> The files are now large text files, not binary files. But since they
> are larger than core.bigfilethreshold they are diagnosed as binary
> by Git. For this reason, the 'git diff' tests that check the output
> for "Binary files differ" still pass.

I was less concerned with tests not passing, as much as tests ending up
testing nothing (which is very hard to test automatically, as you would
have to recreate the original bug!). But I think it is fine, as text is
more likely to get malloc'd than a binary (and these tests are really
about making sure we avoid huge mallocs).

> @@ -162,7 +162,7 @@ test_expect_success 'pack-objects with large loose
> object' '

Funny wrapping here. I imagine Junio can manage to apply it anyway, but
you may want to check your MUA settings.

-Peff
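
To illustrate the setup described in the quoted commit message, here is a
minimal sketch; the file names, sizes, and threshold value below are
illustrative assumptions, not taken from the actual t1050 patch:

    # Generate a large but highly compressible text file with printf
    # instead of dd, then use cp so the other blobs are exact copies.
    printf "%2000000s" X >large1 &&   # ~2MB of spaces plus a trailing "X"
    cp large1 large2 &&               # identical content, identical blob
    cp large1 large3 &&
    # Keep the files above core.bigfilethreshold (value assumed here) so
    # git still treats them as big and diagnoses them as binary in diffs.
    git config core.bigfilethreshold 500k

Because the content is plain spaces, zlib compresses it well, so the test
stays fast even though the blobs exceed the big-file threshold.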