I would always use the latest stable release from the PG project for
four reasons:
a) Databases are notoriously difficult to upgrade once they are in
production and have live data. This is especially true if, like us at
kieser.net, you run PG databases that are 24/7 business critical and in
constant, high-volume use. Taking a database down to upgrade it simply
is not an option, if only because of the risk that something, however
innocuous it may seem in the release notes, may cause a live,
production-level data fault. The worst scenario is that this happens
and goes undetected. So, if you have the chance now, get on the latest
version.
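(For context, a major-version upgrade at this point in PG's history
means a dump and reload, which is exactly why the downtime hurts. A
minimal sketch of that path, with made-up paths and ports, assuming the
new cluster has already been initdb'ed on a second port:)

```shell
# Illustrative only: old 7.x server assumed on port 5432, new 8.x
# server assumed initdb'ed and listening on port 5433.

# Dump every database, role and setting from the old cluster:
pg_dumpall -p 5432 > /backup/all.sql

# Reload the whole dump into the new cluster (connect via template1,
# since the dump itself issues the CREATE DATABASE statements):
psql -p 5433 -d template1 -f /backup/all.sql
```

The write downtime lasts for the whole dump-plus-reload window, which
on a large, busy database is precisely the "not an option" problem.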
b) The dev team just keeps on making this wonderful DB better. Every
release is a gem and well worth having.
c) PG runs on Red Hat with no problems. Besides which, in the real
world, if something goes wrong, chances are you will have to fix it
yourself anyway, or ask us here on this list how to fix it. In the real
world, a support contract is almost never used, and in open source,
Google and lists such as this are far more efficient at getting answers
than most commercial support lines! I know I could get flamed for this,
but I'm just passing on my experience.
d) Most importantly: you WILL be THOROUGHLY testing the new PG release
along with the new servers that you intend to roll out, won't you? You
never, ever roll out anything in production without having properly
QAed it, and I am very certain that you will be doing this. Nudge. Prod.
Incidentally, we use Xen and are rolling out DRBD for disk-level
mirroring. Xen makes life so much easier... you simply migrate the live
machine to another bit of hardware in the event of a fault or a
maintenance requirement, and the performance is near native. It also
means that you can do heavy-handed load balancing if you need to, but
best of all, you can create and test a Xen installation, then use it as
the source of clones... so you are absolutely guaranteed that the
machine you have tested is the same as the set of machines you roll
out. That alone makes it worth running all production machines under
Xen. Money simply cannot put a value on that peace of mind.
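For what it's worth, the live-migration step is a one-liner with Xen's
xm toolstack, and cloning is just file copying. A sketch (the guest
name "pgdb", the host "standby.example.com" and all paths are made up;
it assumes xend relocation is enabled on both hosts and the guest's
disk is visible to both, e.g. via DRBD or shared storage):

```shell
# Hypothetical names throughout; assumes Xen 3.x with the xm toolstack.

# Move the running PG guest "pgdb" to the standby box without downtime:
xm migrate --live pgdb standby.example.com

# Cloning a tested installation is copying its disk image and config,
# then editing the name= and vif= MAC in the copy before booting it:
cp /var/lib/xen/images/pgdb-gold.img /var/lib/xen/images/pgdb-02.img
cp /etc/xen/pgdb-gold /etc/xen/pgdb-02
xm create /etc/xen/pgdb-02
```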
Brad
Devrim GUNDUZ wrote:
Hi,
On Tue, 2006-02-07 at 18:09 -0500, Colin Freas wrote:
My argument is we should use the latest stable version of Postgres.
His take is we ought to use the latest version provided by Red Hat.
(This is for a set of Red Hat Enterprise boxes.)
AFAIK, if you want support from Red Hat, you have to use the packages
provided by Red Hat. If you use any 3rd-party or unsupported packages,
they won't support your system. So it is up to you.
We provide RPMs for RHEL boxes:
http://www.postgresql.org/ftp/binary/
Also Command Prompt announced a PostgreSQL distribution that has RPMs
for RHEL 3 and 4:
http://www.mammothpostgresql.org/
One point of contention in this argument seems to be the notion that
Red Hat ports security fixes to older versions, even if it has to do
this itself. I don't necessarily believe that this happens. That is,
imagine that there's some fix that makes it into the 8.x branch. For
whatever reason, this doesn't go into 7.x. Red Hat is still using the
7.x branch, so it undertakes to do this work itself. Does that sort
of thing really happen?
Red Hat is still using 7.X in RHEL because 8.0 was very fresh when RHEL
4 was released, and I think they felt it was not tested enough for
Enterprise Linux.
Also, Red Hat does not update to a new major version via up2date. This
is their policy, and one that I strongly support.
Is there a general performance improvement from 7 to 8? What about
reliability improvements?
Sure. 8.0 was a revolutionary step.
For how long is the 7.x branch going to be under maintenance and
development (by the community, not by Red Hat)? Is there even a time
frame?
7.2 is now unsupported. Since Red Hat uses 7.3 in RHEL 3, they (and
thus Tom Lane, a core developer of PostgreSQL) will continue supporting
it. That means 7.3 will be supported for at least 2-3 more years. I'm
not sure about the exact EOL of 7.4.
Regards,