Hi All, but there is more to it.
In our db (my main job) we define users individually, so we can do things
like the following: say a user does not get the results he/she expects, or
claims to have some other problem.
We can ps aux | grep user, or SELECT pid FROM pg_stat_activity WHERE ...,
then e.g. ALTER ROLE ... SET log_statement = 'all' (or just enable it for
the whole system for a while), kill -HUP his/her pid, grab our logs, reset
the settings back to their defaults, and investigate.
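Roughly like this, as a sketch (the role name is just a placeholder here,
and keep in mind that per-role settings only kick in for new sessions):

  -- find the backend(s) of the user in question
  SELECT pid, state, query
    FROM pg_stat_activity
   WHERE usename = 'jsmith';

  -- log every statement of that role (applies to new sessions)
  ALTER ROLE jsmith SET log_statement = 'all';

  -- or enable it for the whole cluster for a while and signal the
  -- already running backend to re-read the config:
  --   ALTER SYSTEM SET log_statement = 'all';
  --   (from the shell)  kill -HUP <pid>

  -- afterwards, put everything back and go read the logs
  ALTER ROLE jsmith RESET log_statement;
  ALTER SYSTEM RESET log_statement;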
Another scenario: we just run top and see which users currently have the
highest CPU / disk load (apart from all the usual monitoring).
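In our current (non-container) setup we can then map the busy OS PIDs
from top straight back to database users, something like (the PIDs below
are just example values copied from top):

  SELECT pid, usename, state, query
    FROM pg_stat_activity
   WHERE pid IN (12345, 23456);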
Are there alternatives to this when someone runs a slim / stripped-down
version of the OS in a Docker image? Or does he/she need to sacrifice the
above?
I am not talking about a Kubernetes scenario (for which I have no
experience), just plain Docker.
On 3/10/25 12:56, Achilleas Mantzios - cloud wrote:
On 3/10/25 12:11, Laurenz Albe wrote:
On Mon, 2025-03-10 at 09:28 +0200, Achilleas Mantzios - cloud wrote:
[doesn't think running PostgreSQL in containers in production
is such a hot idea, but sees the concept going mainstream]
What are your thoughts? I am puzzled, because while I used to hear many
skeptical opinions until a few years ago, the trend now seems to be more
on the "acceptance" or neutral side.
Well, lots of people think it is a great idea to host their important
database in a public cloud. Fashions are not necessarily based on
wisdom.
Using Kubernetes for test and play databases that you create and destroy
regularly is a great thing.
Using Kubernetes to squish many small databases on a single machine
while managing the resource usage can be useful.
If you use Kubernetes for everything else and it makes monitoring easy
for you, it may make sense to run a production database that way.
Running your database on Kubernetes will make database administration
and troubleshooting more cumbersome and will require you to create
special
containers for the purpose of upgrading. If these disadvantages are
outweighed by the above advantages, it may make sense.
If you plan to run serious databases on Kubernetes, you better have
dedicated nodes for that purpose, so that you can tune the kernel
parameters.
Thank you Laurenz,
Those friends of mine are PgSQL noobs (hence the choice to use Docker),
and have no plans AFAIK to deploy Kubernetes in the near (or distant)
future. So, to give my opinion on the advantages one by one:
- they don't create and drop DBs regularly; e.g. I upgraded from 14.* ->
17 yesterday, so this DB has been live for some years now.
- they have a few small DBs for the moment, with one being the main one,
so there is no need for squishing either
- they have no Kubernetes running, nor any k8s plans for the future that
I know of.
For all those reasons, and while they are still learning the basics of
PgSQL, I don't think Docker is a good idea. Plus, they don't have a DBA
(apart from me, and I kind of work on a volunteer basis), and when I
eventually leave them, I would like their system to be in good shape for
the next one.
Yours,
Laurenz Albe