Update:
Sorry to reply to my own post, but...
After posting the following I realized that it can't be just a stdout
redirection issue, since I can recreate the same behavior by
fopen()ing a disk file with "w" from a C program. I've modified the
subject accordingly.
Dave
On Fri, 28 Sep 2018, Dave Ulrick wrote:
While debugging a custom program that writes large output files to stdout,
I noticed that the run time displayed by the Bash 'time' prefix varied
wildly from run to run. It turns out the issue isn't specific to my
program; I can recreate it with 'cat':
$ time cat infile >outfile
If 'infile' is on the order of 140 MB, 'time' might show something as low as:
real 0m0.146s
user 0m0.000s
sys 0m0.109s
CPU % 74.29
or as high as:
real 0m0.328s
user 0m0.000s
sys 0m0.109s
CPU % 33.31
If 'outfile' doesn't exist, the 'cat' runs much more quickly:
real 0m0.082s
user 0m0.000s
sys 0m0.081s
CPU % 99.77
Likewise, if I arrange to 'rm' the file right before running 'cat', it runs
consistently quickly:
$ time rm -f outfile
real 0m0.019s
user 0m0.000s
sys 0m0.019s
CPU % 99.93
$ time cat infile >outfile
real 0m0.081s
user 0m0.000s
sys 0m0.081s
CPU % 99.75
If I do these two commands in this order over and over, the timings for 'cat'
are consistently fast.
In my custom C program, I have the same issue if I write to stdout or if I
'fopen' a named file like this:
fopen(outfile_name, "w")
If the file doesn't exist, the program runs consistently quickly:
real 0m0.191s
user 0m0.082s
sys 0m0.109s
CPU % 99.73
but if it exists the run times fluctuate quite a bit from run to run:
real 0m0.579s
user 0m0.072s
sys 0m0.147s
CPU % 37.85
real 0m0.473s
user 0m0.072s
sys 0m0.148s
CPU % 46.60
If I 'fopen' a named disk file right after doing unlink(outfile_name), I get
consistently fast run times.
Note: I don't think this is an issue with buffered I/O to stdout per se.
I say this because the run times are quite consistent when I write to
stdout _as long as the output file didn't previously exist_. Rather, the
issue seems to be with _overwriting_ an existing disk file when I/O
redirection is used.
Are there any known issues with the Linux kernel, 'bash', glibc, or whatever
else might be involved with standard I/O that might cause this issue? If this
is not a known issue, what's the best way to open a bug ticket?
I'm running Fedora 27 with the latest upgrades. The Linux kernel is
4.18.9-100.fc27.x86_64, glibc is 2.26-30, and bash is 4.4.23-1.
Thanks,
Dave
--
Dave Ulrick
d-ulrick@xxxxxxxxxxx
_______________________________________________
users mailing list -- users@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to users-leave@xxxxxxxxxxxxxxxxxxxxxxx
Fedora Code of Conduct: https://getfedora.org/code-of-conduct.html
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/users@xxxxxxxxxxxxxxxxxxxxxxx