Il giorno gio 29 apr 2021 alle ore 19:13 Paul Förster <paul.foerster@xxxxxxxxx> ha scritto:
nothing of it was FUD. It was a comparison done on a single machine. Then I drew my conclusions from that and added my personal view. You don't necessarily have to agree with my opinion, nor did I ask you to agree. But it's definitely not FUD!
Features are not an opinion. I am not trying to convince you that Oracle is better than PostgreSQL (the official PostgreSQL mailing lists are not a good place for that ^^)
But I can't stand when people advocate against Oracle (or FWIW, whatever technology) using, among understandable arguments, also false claims.
Oracle is "heavy" (but fast in application performance), it takes time to install it. Stopping and starting the instances takes time. Patching can be painful if you have encountered too many bugs in the past and need to merge the patches. It is the most expensive database in the world (at least, looking at what you pay and not what you get). It is complex for the DBAs and the learning curve gets steeper and steeper with more and more features added at every release.
All these points are true.
Now, let's keep this momentum and continue with more incontestable truths:
> Online Datafile Movement has existed since 12cR1. 8 years! https://oracle-base.com/articles/12c/online-move-datafile-12cr1
yes, I know. But did you try to move the SYSTEM, UNDO or TEMP tablespaces, or the online redo log files? Did you try to move the *whole* database? You can move all data/index tablespace files with that (one by one, which is tiresome with many files), but you can't move the essential tablespace files! Well, you can move the online redo log files by creating new ones and dropping the old ones, but that's about it. You still can't move the essential tablespace files. I admit that I didn't try that with 19.x, but it wasn't possible up to now.
Tried? I blogged about it 8 years ago: http://www.ludovicocaldara.net/dba/oracle-database-12c-move-datafile-online/
And I use this feature extensively like tons of DBAs out there.
Some more examples:
--- UNDO, move online back and forth
SQL> alter database move datafile '+DATA/_MGMTDB/DATAFILE/undotbs1.279.1071175797' to '/tmp/undotbs1.dbf';
Database altered.
--- SYSTEM, move online back and forth
SQL> ALTER DATABASE MOVE DATAFILE '/u02/app/oracle/oradata/cdb1/system01.dbf' TO '+DATA';
Database altered.
SQL> alter database move datafile '/tmp/system01.dbf' to '+DATA';
Database altered.
--- TEMPFILE: add a new one and drop the old one
SQL> alter tablespace temp add tempfile '/tmp/temp01.dbf' size 50M;
Tablespace altered.
SQL> alter database tempfile '+DATA/_MGMTDB/TEMPFILE/temp.284.1070901055' offline;
Database altered.
SQL> alter database tempfile '+DATA/_MGMTDB/TEMPFILE/temp.284.1070901055' drop including datafiles;
Database altered.
> Well, you can move the online reado log files by creating new ones and dropping the old ones but that's about it.
what do you mean... "but that's about it"? redo logs are not datafiles, new ones are created when the old ones are full, just like WAL files. You decide where to put them and just archive the old ones.
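To illustrate the point, relocating the online redo logs is just a matter of adding new groups on the target and dropping the old ones; a minimal sketch (group numbers, paths and sizes below are hypothetical, and a group must not be CURRENT or ACTIVE when you drop it):

```sql
-- Add new groups on the target filesystem (example paths):
ALTER DATABASE ADD LOGFILE GROUP 4 ('/u03/oradata/cdb1/redo04.log') SIZE 200M;
ALTER DATABASE ADD LOGFILE GROUP 5 ('/u03/oradata/cdb1/redo05.log') SIZE 200M;

-- Cycle past the old groups, then drop them:
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM CHECKPOINT;
ALTER DATABASE DROP LOGFILE GROUP 1;
```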
And if I want to move an entire database... with ASM, 100% with Oracle technology, you can switch from one storage system to another without instance downtime or brownouts. Or you can do some junior-level scripting and automate the online datafile movement between different filesystems.
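That "junior-level scripting" can be as simple as generating the statements from v$datafile; a sketch, where '/u02/oradata' and '/u03/oradata' are example mount points:

```sql
-- Generate one MOVE statement per datafile still on the old filesystem:
SELECT 'ALTER DATABASE MOVE DATAFILE ''' || name || ''' TO '''
       || REPLACE(name, '/u02/oradata', '/u03/oradata') || ''';'
FROM   v$datafile
WHERE  name LIKE '/u02/oradata/%';
```

Spool the output and execute it, and the files relocate one by one with the database open.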
Control files are the only thing that you cannot move without bouncing the instance. Truth. (It's in my blog as well.) But with ASM you can change disks online.
> PostgreSQL must be stopped in order to move the database to a new path, and if it is to a new filesystem, you need the time for a full copy of the data, unless you do it via backup and recovery to reduce the downtime.
that's not true. Run pg_basebackup while the database is running, pointing at the new destination. Set up primary_conninfo and replication, and start up the copy. Once it's in sync and you have a physical copy, change the port in the copy's postgresql.conf, stop both, then launch only the copy and promote it. The switch takes 2-3 secs of downtime.
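For illustration, once the copy is streaming and in sync, the final switch on the copy can be sketched in SQL (assuming PostgreSQL 12 or later, where pg_promote() exists; older versions use pg_ctl promote or a trigger file instead):

```sql
-- On the copy, after stopping the old primary:
SELECT pg_is_in_recovery();  -- true while it is still a standby
SELECT pg_promote();         -- promote it to a read-write primary
```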
I did tell the truth, and I quote myself again in case you skipped my sentence: "unless you do it via backup and recovery to reduce the downtime." That's what a replica is: a backup that you keep recovering until you switch to it.
This applies to Oracle as well, BTW, in case you want to relocate to another server. It might take longer to relocate, but there are technologies that make it transparent to the application (complex to implement, yes, but they do exist).
again no, you can't move SYSTEM, UNDO and TEMP! Also, what I mentioned was a quick and dirty hack and is not recommended.
Read above. Read the doc. Try it yourself. Ask your friends. Do something, but stop saying that it's not possible.
but that covers 99.9% of all applications, no matter how big their databases become during their lifetime. It's like the typical M$ Office argument: "everyone has it, so it's used in this company too!" Ok, this is whataboutism, but it applies here.
Amen. I have never said that Oracle is a solution for everything, and I never will.
with PostgreSQL you can install any extension you want or need, or even write them yourself. Your point being? Throwing more millions at Oracle? It can't even authenticate LDAP or Windows AD users without buying a very expensive directory service (OID, OUD, or whatever) from them.
That has been a big point in favor of PostgreSQL for a long time. (I have even presented my PostgreSQL experiences publicly in the past, and this was one of the points.) Now with Oracle CMU (Centrally Managed Users), Oracle too can authenticate against AD, without additional options. But that's from 18c, so I can understand if you don't know it.
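As a sketch of what CMU looks like once it is configured (wallet and dsi.ora pointing at the AD server; the user and group names below are hypothetical), mapping an AD group to a shared database user is a one-liner:

```sql
-- Map an Active Directory group to a global (shared) database user:
CREATE USER ad_app_dbas IDENTIFIED GLOBALLY AS
  'CN=app_dbas,OU=Groups,DC=example,DC=com';
GRANT CREATE SESSION TO ad_app_dbas;
```

Every member of that AD group can then log in with their directory credentials, without a per-user database account.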
> Oracle not only gives you the instruments to understand what's happening, but also what's happened in the past.
[..]
> PostgreSQL gives you the source code that is obviously better, but it gives different answers for different problems.
there's the brochure again. Look at PostgreSQL's single most important log file and you have it all, depending on what logging you have enabled or disabled.
"The single most important log file"! PostgreSQL has so many extensions that can increase the instrumentation
and give you insights! If you want to become a PostgreSQL expert you should do better than that.
pg_sentinel, pg_stat_kcache and pg_qualstats are some of my favorites, which I used to compile even in production. I would suggest giving them a try and experiencing something better than just passively consuming the logs, at least when it comes to troubleshooting.
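As one example among many, the widely used pg_stat_statements extension already goes well beyond the logs; a minimal sketch (column names assume PostgreSQL 13 or later, where total_time was split into total_exec_time):

```sql
-- Requires pg_stat_statements in shared_preload_libraries:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Top 5 statements by cumulative execution time:
SELECT query, calls, total_exec_time, mean_exec_time
FROM   pg_stat_statements
ORDER  BY total_exec_time DESC
LIMIT  5;
```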
> I used to have procedures to create 10046 traces and analyze them. Again, instrumentation is knowledge.
again, why need that at all?
If you have never needed it, you have never experienced a serious performance problem in production (not with PostgreSQL either, if you have been limited to reading log files. gdb, strace, ptrace, perfmon? Does it ring any bell?)
Oh, and there's still the LONG datatype, which Oracle can't seem to get rid of in its own database because it's so bloated, even though it has been recommending for almost 20 years now that people not use it. SCNR.
And with Oracle, you still can't rename users, a feature that DBAs have been asking for for decades. And there's more stuff, but I'm tired now.
True. And so am I.
Bottom line: You don't work *at* Oracle, you don't work *with* Oracle, you work *for* Oracle as an evangelist. That's why this is my final mail in this thread. I don't discuss religion.
I joined Oracle 6 months ago. I've worked 20 years with Oracle, MySQL, PostgreSQL, SQL Server, DB2, Sybase... Believe me when I say that I know these products inside out. Don't judge me only because of my current employer; that is YOUR prejudice.
Cheers
--
Ludovico