Re: [PATCH v6 10/13] convert: generate large test files only once

> On 29 Aug 2016, at 19:52, Junio C Hamano <gitster@xxxxxxxxx> wrote:
> 
> Lars Schneider <larsxschneider@xxxxxxxxx> writes:
> 
>>> On 25 Aug 2016, at 21:17, Stefan Beller <sbeller@xxxxxxxxxx> wrote:
>>> 
>>>> On Thu, Aug 25, 2016 at 4:07 AM,  <larsxschneider@xxxxxxxxx> wrote:
>>>> From: Lars Schneider <larsxschneider@xxxxxxxxx>
>>>> 
>>>> Generate more interesting large test files
>>> 
>>> How are the large test files more interesting?
>>> (interesting in the notion of covering more potential bugs?
>>> easier to debug? better to maintain, or just a pleasant read?)
>> 
>> The old large test file was 1MB of zeros followed by a single one byte, repeated 2048 times.
>> 
>> Since the filter uses 64k packets, we would test a large number of identical-looking packets.
>> 
>> That's why I thought pseudo-random content would be more interesting.
> 
> I guess my real question is why it is not just a single invocation
> of test-genrandom that gives you the whole test file; if you are
> using 20MB, the simplest would be to grab 20MB out of test-genrandom.
> With that, hopefully you won't see a large number of identical-looking
> packets, no?

True, but applying rot13 (via tr ...) to 20+ MB takes quite a bit of
time. That's why I came up with the 1MB of SP in between.
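
For reference, a minimal sketch of the tr-based rot13 I have in mind
(assuming POSIX tr ranges; the actual filter script in the series may
differ):

    #!/bin/sh
    # rot13: shift every ASCII letter by 13 places; on 20+ MB of
    # input this pipe dominates the runtime of the test
    tr 'A-Za-z' 'N-ZA-Mn-za-m'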

However, I realized that testing a large amount of data is not really
necessary for the final series. A single packet is 64k, so a 500k
pseudo-random test file should be sufficient. That will make the test
much simpler.
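
A sketch of what I mean, in a single invocation (the seed string and
the file name here are placeholders):

    # test-genrandom takes a seed string and a byte count, so one
    # call produces the whole 500k test file
    test-genrandom seed $((500 * 1024)) >large.file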

Thanks,
Lars


