TL;DR: If I spend the time necessary to instrument the many PostgreSQL functions that are the equivalents of their Oracle counterparts, would anyone pull those changes and use them? Specifically, for those who know Oracle, I'm talking about implementing:
- The portion of the ALTER SESSION statement that enables extended SQL trace
- Most of the DBMS_MONITOR and DBMS_APPLICATION_INFO packages
- Instrumentation of the thousand or so functions that are the equivalents of the wait events found in Oracle's V$EVENT_NAME
- The V$DIAG_INFO dynamic performance view
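
For readers who don't know the Oracle side, here is roughly what those four interfaces look like there (standard Oracle syntax; the SID/serial#, module, and action values are placeholders for illustration):

```sql
-- Enable extended SQL trace for the current session
-- (level 12 = include wait events and bind values):
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

-- The supported interface: enable tracing in another session by SID/serial#:
BEGIN
  DBMS_MONITOR.SESSION_TRACE_ENABLE(
    session_id => 123, serial_num => 456,   -- placeholder identifiers
    waits => TRUE, binds => TRUE);
END;
/

-- Attribute work to a business task so trace data can be grouped by it:
BEGIN
  DBMS_APPLICATION_INFO.SET_MODULE(
    module_name => 'order-entry', action_name => 'checkout');
END;
/

-- Find where the session's trace file was written:
SELECT value FROM V$DIAG_INFO WHERE name = 'Default Trace File';
```

The proposal is to give PostgreSQL equivalents of these four pieces, so that a session can switch tracing on, tag its work, and locate the resulting file without any server-side tooling.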
For the last 35 years, I've made my living helping people solve Oracle performance problems by looking at where the time goes, which means:
Trace a user experience and profile the trace file to (a) reveal where the time has gone and which algorithm consumed it, and (b) make it easy to estimate the cost of possible solutions as well as the savings in response time or resources.
I've even submitted change requests to improve Oracle's tracing features, both during my glorious five years working there and since.
Now, looking closely at PostgreSQL, I see an opportunity to implement the equivalent of Oracle's current tracing feature set relatively quickly.
I've come to this point because I see many roadblocks for users who want a detailed "receipt" for their response time. The biggest roadblock is that, without a lot of automation, a user of any kind must log into the server and try to collect data that are traditionally child's play to obtain in Oracle. The second biggest roadblock is the recompilation required for the server components (i.e., PostgreSQL, the operating system). My initial attempts to get anything useful out of PostgreSQL were dismal failures, and I think it should be infinitely easier.
Running dtrace or eBPF scripts on the server should not be required. The instrumentation and the code being instrumented should be tightly coupled. That coupling would let anyone, on any platform, on any PostgreSQL version, get a trace file just as easily as Oracle users do.
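
To make the goal concrete, a purely hypothetical sketch of what the user-facing side could feel like in psql (none of these names exist today; the GUC and function are invented here only to illustrate the "no server access, no recompilation" experience):

```sql
-- Hypothetical GUC: switch extended tracing on for my own session,
-- analogous to Oracle's ALTER SESSION event.
SET trace_my_session = on;            -- invented name, for illustration

-- Hypothetical function: a superuser enables tracing for another backend,
-- analogous to DBMS_MONITOR.SESSION_TRACE_ENABLE.
SELECT trace_backend(pid => 12345);   -- invented name, for illustration

-- Hypothetical view: where did my trace file go,
-- analogous to V$DIAG_INFO.
SELECT * FROM pg_trace_info;          -- invented name, for illustration
```

The point of the sketch is the shape of the interface, not the names: everything happens over an ordinary client connection, on any platform, with stock binaries.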