On 6/3/20 3:21 PM, Martin Mueller wrote:
On the topic of what other databases do better: I much prefer Postgres
to MySQL because it has better string functions and better, as well as
very courteous, error messages. But MySQL has one feature that sometimes
makes me want to return to it: it stores the most important metadata
about tables in a MySQL table that can be queried as if it were just
another table. That is a really nice feature. It makes it very easy to
find, among a lot of other things, the table you edited most recently.
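For example, finding the most recently updated tables is just an
ordinary query (a sketch against MySQL 8.0's information_schema; the
schema name is only a placeholder):

    SELECT table_name, update_time, table_rows
    FROM   information_schema.tables
    WHERE  table_schema = 'mydb'        -- placeholder schema name
    ORDER  BY update_time DESC
    LIMIT  10;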
With a lot of caveats:
https://dev.mysql.com/doc/refman/8.0/en/show-table-status.html
" Update_time
When the data file was last updated. For some storage engines, this
value is NULL. For example, InnoDB stores multiple tables in its system
tablespace and the data file timestamp does not apply. Even with
file-per-table mode with each InnoDB table in a separate .ibd file,
change buffering can delay the write to the data file, so the file
modification time is different from the time of the last insert, update,
or delete. For MyISAM, the data file timestamp is used; however, on
Windows the timestamp is not updated by updates, so the value is inaccurate.
Update_time displays a timestamp value for the last UPDATE, INSERT, or
DELETE performed on InnoDB tables that are not partitioned. For MVCC,
the timestamp value reflects the COMMIT time, which is considered the
last update time. Timestamps are not persisted when the server is
restarted or when the table is evicted from the InnoDB data dictionary
cache. "
What are the "lot of other things"?
My guess is they can be found in information_schema.*.
Why doesn’t Postgres have that feature? Or is there a different and
equally easy way of getting at these things that I am just missing?
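As far as I know, Postgres keeps no per-table "last updated" timestamp
in its catalogs. The closest substitutes seem to be the statistics views
and, if track_commit_timestamp = on was set before the changes were
made, per-row commit times. A rough sketch (the table name is only a
placeholder, and the second query scans the whole table):

    -- Which tables have churn (counters only, no timestamp):
    SELECT relname, n_tup_ins, n_tup_upd, n_tup_del, last_autovacuum
    FROM   pg_stat_user_tables
    ORDER  BY n_tup_ins + n_tup_upd + n_tup_del DESC;

    -- Last commit time of rows in a given table; older/frozen rows
    -- may have no timestamp available:
    SELECT max(pg_xact_commit_timestamp(xmin)) AS last_change
    FROM   some_table;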
From: Andreas Joseph Krogh <andreas@xxxxxxxxxx>
Date: Wednesday, June 3, 2020 at 12:54 PM
To: Chris Travers <chris.travers@xxxxxxxxx>
Cc: "pgsql-generallists.postgresql.org"
<pgsql-general@xxxxxxxxxxxxxxxxxxxx>
Subject: Re: Oracle vs. PostgreSQL - a comment
On Wednesday, 3 June 2020 at 20:07:24, Chris Travers
<chris.travers@xxxxxxxxx> wrote:
[...]
Regardless of what Oracle does, I agree this would be a huge
step in the right direction for pg-DBAs.
I have absolutely no clue about how much work is required etc.,
but I think it's kind of strange that no companies have invested
in making this happen.
I manage database clusters where the number of databases is a reason
not to do logical-replication-based upgrades; pg_upgrade is by far
the preferred approach there.
If this were to happen, I would be very concerned that a number of
things would have to change:
1. Shared catalogs would have txid problems unless you stay with
global txids, and then how would local WAL streams work?
2. Streaming replication could suddenly show different amounts of lag
for different databases.
3. I/O management of WAL would become a problem on high-throughput
systems (I have systems where a db cluster generates 10-20 TB of WAL
per day).
So I am not at all sure this would be a step in the right direction
or worth the work.
I agree these are all technical issues, but they are nevertheless
"implementation details" which DBAs don't care about. What's important
from a DBA's perspective is not whether WAL is cluster-wide or
database-wide, but whether it's possible to manage backups/PITR/restores
of individual databases in a more convenient manner, which other
RDBMS vendors seem to provide.
I love PG, have been using it professionally since 6.5, and our company
depends on it, but there are things other RDBMS vendors do better...
--
Andreas Joseph Krogh
--
Adrian Klaver
adrian.klaver@xxxxxxxxxxx