Sorry for this reply, but I feel it is necessary to make clear what is reality and what is FUD against Oracle in Paul's e-mails in this thread...
(Note: I work for Oracle now, but I've had 20 years of experience as a multi-platform database consultant.)
Paul Förster <paul.foerster@xxxxxxxxx> wrote:
> Oracle requires 161 additional packages to be installed, many of which are 32-bit packages, for a supposedly 64-bit only software! This results in 150 MB additional disk space needed and swamps the system with 32-bit packages!
That is... not a problem. Is it, really?
> The oracle installation process is horrible.
> that's why I scripted the whole create database thing, including PDBs, and their parameters, file paths, etc. For example, my script to create a container database is 782 lines long, whereas PostgreSQL just needs an "initdb". And my script to create a PDB still has 277 lines whereas in PostgreSQL, you can do it with a simple "create database" line.
Although I completely agree that the Oracle installation process is much longer and more complex than PostgreSQL's, I disagree with the rest.
The CREATE PLUGGABLE DATABASE statement is also a single-line SQL command... The scripts to create a PDB or a PostgreSQL database depend a lot on what you want to achieve (empty database? specific users or permissions? sanity checks? pre-emptive backup? registration in the CMDB?).
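For the trivial case, both really are one-liners. A minimal sketch (the PDB name, admin user and FILE_NAME_CONVERT paths below are made up for illustration):

  -- Oracle: create and open a PDB cloned from the seed
  CREATE PLUGGABLE DATABASE mypdb
    ADMIN USER pdbadmin IDENTIFIED BY "ChangeMe_123"
    FILE_NAME_CONVERT = ('/u01/oradata/CDB1/pdbseed/', '/u01/oradata/CDB1/mypdb/');
  ALTER PLUGGABLE DATABASE mypdb OPEN;

  -- PostgreSQL: the equivalent trivial case
  CREATE DATABASE mydb;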
For a new PostgreSQL architecture I have, in the past, written 230 lines of code to automate database creation in an existing PostgreSQL cluster. That included setting up application users, hardening the default permissions on the public schema, registering in the CMDB, etc. It is not much code in my opinion, and it is done once and for all.
For a similar project with Oracle Multitenant, the create_pdb.sh was 177 lines of code, including dealing with TDE wallets and CMU authentication. Again, not that much IMO.
> Even moving a database to another path is a nightmare as you'd have to create new controlfiles, etc. With PostgreSQL you just change the PGDATA variable after moving/copying the whole database cluster and that's it. Well, if you copy it and want to run both at the same time, you still have to change the port in postgresql.conf of course.
This is FUD against Oracle, or a lack of basic Oracle knowledge. Oracle's online move, reorganization and patching capabilities are far ahead of PostgreSQL's.
Online Datafile Movement has existed since 12cR1. 8 years! https://oracle-base.com/articles/12c/online-move-datafile-12cr1
Prior to that, for many years, it was possible to offline, move, rename and online datafiles, either in groups or individually, without stopping the instance. Online redo logs can be rotated to a new location online. The only exception is the controlfiles, which require an ALTER SYSTEM, shutdown, move, startup. PostgreSQL must be stopped in order to move the database to a new path, and if it is moved to a new filesystem you need the time for a full copy of the data, unless you do it via backup and recovery to reduce the downtime.
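For reference, the online move is a single statement; the paths below are just an example:

  -- 12cR1 and later: move a datafile while it remains online and in use
  ALTER DATABASE MOVE DATAFILE '/u01/oradata/CDB1/users01.dbf'
    TO '/u02/oradata/CDB1/users01.dbf';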
> it works well if the length of path+filename does not change. I had bad experiences with this technique if the length changes because controlfiles are binary files unless you alter database backup controlfile to trace as '...'. So, as I said, you need to recreate the controlfile.
Again no, you don't need to recreate the controlfile to move the datafiles, and no: altering binary controlfiles with `sed` is nothing a production DBA would ever do...
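And for the controlfiles it is the ALTER SYSTEM / shutdown / move / startup sequence mentioned above, not a recreation. A sketch, with illustrative paths:

  -- point the spfile at the new controlfile locations
  ALTER SYSTEM SET control_files =
    '/u02/oradata/CDB1/control01.ctl',
    '/u02/oradata/CDB1/control02.ctl' SCOPE=SPFILE;
  SHUTDOWN IMMEDIATE
  -- move or copy the controlfiles at the OS level, then:
  STARTUP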
> The way they handle indexes and updates is also much faster than postgres and you have to worry less about the structure of your queries with respect to performance.
> and then, some day, a developer approaches a DBA with a query which is generated and, if printed out in a 11pt. sized font, can fill a billboard on a street, to optimize it or search for what's wrong with it, or why it performs so slow... That's usually when I play BOFH because I'm not willing to debug 10 pages which its creator hasn't even cared to take a look at first. :-P :-)
The laziness or lack of knowledge of your developers is not a problem with Oracle technology. Still, you can take a "query which is generated and, if printed out in a 11pt. sized font, can fill a billboard on a street", give it to Oracle and get the optimal execution plan 99.9% of the time. And if the execution is not optimal, Statistics Feedback kicks in and tries to produce a better plan next time. And if it still fails, you can use hints, or produce a 10053 trace, pinpoint the reason for the CBO's choice, and get better statistics (or physical structures) for it.
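For the record, a 10053 trace is just another session event; a minimal sketch (the level depends on what you are chasing):

  -- dump the CBO decisions taken at the next hard parse of the statement
  ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';
  -- ... run (hard parse) the problematic statement here ...
  ALTER SESSION SET EVENTS '10053 trace name context off';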
> > Comparing Postgres with Oracle is a bit like comparing a rubber duck you might buy your three year old, with a 300000 ton super tanker.
> yes, and no. You are right about Oracle having gazillions of features but your comparison is way too drastic.
> But be honest: How many features do you actually need? Most people use create table, view, sequence, index and that's basically it. Few use XML tables, Java inside the RDBMS, some (unfortunately) use Oracle Text. Many use BLOBs (instead of CLOBs) to mitigate the varchar2(4000) problem. Bottom line, most applications happily perform (even much better) on not so huge monsters.
Seriously, I agree with part of this. I would never buy something as expensive as Oracle for a basic CRUD application; PostgreSQL does the job very well.
With Oracle you buy additional features that go beyond that, and it is up to you to evaluate what you will be missing when using PostgreSQL instead (you might say "not much for the price"; I would not agree), and whether you will still be able to meet your business requirements.
Because with Oracle you don't buy a good DBA experience (or a good developer experience, even if I think that the developer experience with Oracle is actually pretty good); you buy an enabler for specific business-critical requirements: availability, performance, concurrency... PostgreSQL has its own solutions for those, but I would not challenge Oracle on this... For example, I would never trade Edition-Based Redefinition or Oracle Flashback capabilities for PostgreSQL's transactional DDL (the #1 missing Oracle feature that pops out when comparing with PGSQL). Transactional DDL improves the dev experience, but not necessarily the business.
> yes, but why do I need a huge hex block section in some trace file? Only Oracle can read that anyway. I don't have that with PostgreSQL because I don't need it.
Trying to minimize the value of Oracle instrumentation is as FUD as it can get... Oracle not only gives you the instruments to understand what is happening, but also what has happened in the past.
Oracle gives you a way to answer this question: "Why was my query slow last night, and how can I be sure that it will run faster tonight?" with something other than "I don't know". And probably the original statement would have been "we have lost money because the application was non-responsive; this must not happen again". Instrumentation is knowledge. If you are not able to read it, that is not a problem with Oracle technology. PostgreSQL gives you the source code, which is obviously better, but it gives different answers to different problems.
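A small example of what I mean (assuming the Diagnostics Pack is licensed; the time window is just an illustration): the ASH history lets you go back to last night and see which SQL was active and what it was waiting on:

  -- top SQL by sampled activity over the last 24 hours (Diagnostics Pack required)
  SELECT sql_id, event, COUNT(*) AS samples
    FROM dba_hist_active_sess_history
   WHERE sample_time > SYSTIMESTAMP - INTERVAL '1' DAY
   GROUP BY sql_id, event
   ORDER BY samples DESC;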
> I don't need something like "alter session set events '10046 trace name context forever'" and learn that by heart. Why should I?
I used to have procedures to create 10046 traces and analyze them. Again, instrumentation is knowledge.
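For completeness, this is the kind of thing those procedures wrapped; a sketch, where the trace level and the tkprof options are a matter of taste:

  -- extended SQL trace with binds and waits for the current session
  ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
  -- ... run the statements to be analyzed ...
  ALTER SESSION SET EVENTS '10046 trace name context off';
  -- then format the raw trace file at the OS level, e.g.:
  --   tkprof <tracefile>.trc report.txt sys=no sort=exeela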
> Why not just limit the downtime as drastic as can *easily* be done with PostgreSQL in the first place without the whole setup nightmare that Oracle requires? I've been asking myself that for ages and always wondered why it couldn't be just as easy as it is with PostgreSQL.
I completely agree with that. But with RAC you have online patching, and with ADG you have rolling upgrades with almost no downtime: not for the instance, but for the business. (I know, more money, but it is still possible.)
--
Ludovico