been just being naive) --- I can't remember the exact name, but I
remember using (on some Linux flavor) an API call that fills a struct
with data on the resource usage of the process, including CPU time; I
assume it is measured precisely (that is, immune to other applications
running simultaneously, or to other random events polluting the
measurement with noise).
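If it helps, the description sounds like getrusage(2) --- a minimal
sketch, assuming that is indeed the call in question:

    #include <stdio.h>
    #include <sys/resource.h>

    /* getrusage(2) fills a struct rusage with accounting data for the
     * calling process, including CPU time split into user and system
     * components --- charged to the process regardless of what else is
     * running on the machine. */
    int main(void)
    {
        struct rusage ru;

        /* ... the loop being timed would run here ... */

        if (getrusage(RUSAGE_SELF, &ru) == 0)
            printf("user %ld.%06ld s  sys %ld.%06ld s\n",
                   (long) ru.ru_utime.tv_sec, (long) ru.ru_utime.tv_usec,
                   (long) ru.ru_stime.tv_sec, (long) ru.ru_stime.tv_usec);
        return 0;
    }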
since what we are looking for here is a reasonable first
approximation, not perfection, I don't think we should worry much
about pollution of the value.
Well, it's not so much worrying as it is choosing the better of two
equally difficult options --- what I mean is that obtaining the *real*
resource usage as reported by the kernel is, from what I remember,
just as hard as obtaining the time with milli- or microsecond
resolution.
So, why not choose this option? (In fact, if we wanted to do it "the
scripted way", I guess we could still run "time test_cpuspeed_loop"
and read the report from the time command, which gives both the CPU
time and the time spent in system calls.)
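For instance, /usr/bin/time -p prints exactly the figures we care
about (the binary name is the hypothetical one from above, and the
numbers are only illustrative):

    $ /usr/bin/time -p ./test_cpuspeed_loop
    real 2.43
    user 2.40
    sys 0.01

where "user" plus "sys" is the kernel-reported CPU time, independent
of wall-clock noise.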
As for 32/64 bit --- doesn't PG already know that? I mean,
./configure does gather that information --- does it not?
we're not talking about compiling PG, we're talking about getting sane
defaults for a pre-compiled binary. if it's a 32-bit binary assume a
32-bit cpu, if it's a 64-bit binary assume a 64-bit cpu (all hardcoded
into the binary at compile time)
Right --- I was thinking that configure, which, as I understand it,
generates the Makefiles used to compile the applications, including
initdb, could plug those values in as compile-time constants, so that
initdb (or a hypothetical additional utility that would do what we're
discussing in this thread) already has them. Anyway, yes, that would
go for the binaries as well --- we're pretty much saying the same
thing :-)
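Just to make the idea concrete, here is a minimal sketch (not actual
PG code) of how a binary carries that assumption; the pointer size is
fixed when the binary is compiled, so no runtime probing is needed
(and, if I recall correctly, configure already records the same fact
as SIZEOF_VOID_P in pg_config.h):

    #include <stdio.h>

    int main(void)
    {
        /* A 64-bit build sees 8-byte pointers and a 32-bit build
         * 4-byte ones, so the "bitness" is effectively hardcoded at
         * compile time --- exactly what the pre-compiled-binary case
         * needs. */
        int assumed_cpu_bits = (int) (sizeof(void *) * 8);

        printf("binary is %d-bit; assume a %d-bit CPU\n",
               assumed_cpu_bits, assumed_cpu_bits);
        return 0;
    }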
Carlos
--