Re: [PATCH v2 4/5] convert: generate large test files only once

On Wed, Jul 27, 2016 at 04:35:32AM +0200, Torsten Bögershausen wrote:

> > +	mkdir -p generated-test-data &&
> > +	for i in $(test_seq 1 $T0021_LARGE_FILE_SIZE)
> > +	do
> > +		# Generate 1MB of empty data and 100 bytes of random characters
> > +		printf "%1048576d" 1
> > +		printf "$(LC_ALL=C tr -dc "A-Za-z0-9" </dev/urandom | dd bs=$((RANDOM>>8)) count=1 2>/dev/null)"
> I'm not sure how portable /dev/urandom is.
> The other thing is that "really random" numbers are overkill, and
> it may be easier to use pre-defined numbers.

Right, there are a few reasons not to use /dev/urandom:

  - it's not portable

  - if we have to generate a lot of numbers, it drains the system's
    entropy pool, which is an unfriendly thing to do (and may also be
    slow)

  - it makes our tests random! This sounds like a good thing, but it
    means that if some input happens to cause failure, you are unlikely
    to be able to reproduce it.

Instead, use test-genrandom, which is an LCG that starts from a given
seed. So you get a large amount of random-ish data quickly and
portably, and you get the same data each time.
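
For example, the loop above could collapse to a single call, something
like this (a rough sketch, not tested; test-genrandom takes a seed
string and an optional byte count, and the seed string and output file
name here are just placeholders):

	mkdir -p generated-test-data &&
	test-genrandom "t0021 large file" \
		$(($T0021_LARGE_FILE_SIZE * 1048576)) \
		>generated-test-data/large-file

Because the stream is seeded, two runs produce byte-identical output,
so any failure triggered by the data can be reproduced exactly.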

-Peff