On Thu, 3 May 2007, Carlos Moreno wrote:
>> CPUs, 32/64bit, or clock speeds. So any attempt to determine "how fast"
>> a CPU is, even on a 1-5 scale, requires matching against a database of
>> regexes which would have to be kept updated.
>>
>> And let's not even get started on Windows.
>> I think the only sane way to try to find the CPU speed is just to do a
>> busy loop of some sort (ideally something that somewhat resembles the
>> main code) and see how long it takes. You may have to do this a few
>> times until you get a loop that takes long enough (a few seconds) on a
>> fast processor.
> I was going to suggest just that (but then was afraid that, again, I was
> just being naive) --- I can't remember the exact name, but I remember
> using (on some Linux flavor) an API call that fills a struct with data on
> the resource usage for the process, including CPU time; I assume it is
> measured precisely (that is, immune to issues of other applications
> running simultaneously, or other random events polluting the measurement
> with random noise).
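the call being described is most likely getrusage(); a minimal sketch,
assuming a POSIX system, of pulling the process's own CPU time out of the
struct it fills in:

#include <stdio.h>
#include <sys/time.h>
#include <sys/resource.h>

/* CPU time (user + system) consumed so far by this process, in seconds. */
static double process_cpu_seconds(void)
{
    struct rusage ru;

    if (getrusage(RUSAGE_SELF, &ru) != 0)
        return -1.0;

    return (ru.ru_utime.tv_sec + ru.ru_stime.tv_sec)
         + (ru.ru_utime.tv_usec + ru.ru_stime.tv_usec) / 1e6;
}

int main(void)
{
    printf("CPU time used so far: %.6f s\n", process_cpu_seconds());
    return 0;
}

because it reports only the time actually charged to this process, it is
largely immune to other programs competing for the CPU (though not to
cache or disk contention).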
since what we are looking for here is a reasonable first approximation, not
perfection, I don't think we should worry much about pollution of the value.
if the person has other things running during the test that will also be
running when they run the database, that's no longer 'pollution'; it's part
of the environment. I think a message at runtime warning that the test may
produce inaccurate results if heavy processes are running during
configuration that won't be running with the database would be good enough
(and remember, it's not only CPU time that's affected this way; disk
performance is as well).
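as a rough sketch of the calibration loop (the arithmetic in the body is
only a placeholder for work that resembles the real code, and clock() here
charges only this process's CPU time):

#include <stdio.h>
#include <time.h>

static volatile unsigned long sink;   /* keep the loop from being optimized away */

/* Run the placeholder workload and return the CPU seconds it took. */
static double timed_run(unsigned long iterations)
{
    clock_t start = clock();
    unsigned long x = 1;

    for (unsigned long i = 0; i < iterations; i++)
        x = x * 2654435761UL + i;     /* placeholder work */

    sink = x;
    return (double) (clock() - start) / CLOCKS_PER_SEC;
}

int main(void)
{
    unsigned long iterations = 1000000UL;
    double seconds = timed_run(iterations);

    /* Double the work until a single run takes a couple of CPU seconds,
     * so the result is meaningful even on a fast processor. */
    while (seconds < 2.0)
    {
        iterations *= 2;
        seconds = timed_run(iterations);
    }

    printf("%.0f iterations/second\n", iterations / seconds);
    return 0;
}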
> As for 32/64 bit --- doesn't PG already know that information? I mean,
> ./configure does gather that information --- does it not?
we're not talking about compiling PG, we're talking about getting sane
defaults for a pre-compiled binary. if it's a 32-bit binary, assume a 32-bit
CPU; if it's a 64-bit binary, assume a 64-bit CPU (all hardcoded into the
binary at compile time).
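something like this would do it; a sketch only, but since pointer width is
fixed when the binary is built, a preprocessor check is all that's needed:

#include <stdio.h>
#include <stdint.h>

#if UINTPTR_MAX > 0xFFFFFFFFu
#define BINARY_BITS 64            /* 64-bit pointers -> 64-bit binary */
#else
#define BINARY_BITS 32
#endif

int main(void)
{
    printf("this is a %d-bit binary, so assume a %d-bit CPU\n",
           BINARY_BITS, BINARY_BITS);
    return 0;
}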
David Lang