> On Jul 28, 2021, at 11:18 PM, Vijaykumar Jain <vijaykumarjain.github@xxxxxxxxx> wrote:
>
> What was the real need to have a postgresql cluster running in kubernetes?

When you have everything else running in K8s, it's awkward to keep around a cluster of VMs just for your db -- and running on dedicated hardware has its own tradeoffs.

> Now with k8s, every time the server sneezes, the pods would get spawned onto different servers, which would result in a full resync unless volumes could be moved.

You either have shared network volumes (persistent volume claims in K8s terminology), in which case the migrated server re-mounts the same volume. Or you can use local storage, in which case the servers are bound to specific nodes with that storage and don't migrate (you have to manage this a bit manually, and it's a tradeoff for likely higher-performing storage).

Also, what makes you think the server will "sneeze" often? I cannot remember the last time postgres quit unexpectedly.

> Since containers are now in a shared environment, and if it is mostly over-committed, then tuning of the various params of an instance would be totally different compared to what it was on a dedicated vm.

We don't find params to be different, but we are not really over-committed.

> Noisy neighbours: typical heavy activity like a bot attack on some services, even ones that do not touch the db on the same server, will have a serious impact due to shortage of resources.

This is not really different than VMs. You either are able to manage this reasonably, or you need dedicated hardware.

> In our case dns was under huge stress due to constant bouncing of services and discovery compared to the original monoliths; it was not tuned to handle that amount of change and suffered stale cache lookups. For apps it would be OK, as they implement circuit breakers, but wouldn't an intra-pg setup for barman or logical replication or pgbackrest suffer a longer outage?

You needed to fix your services.
If your DNS is overloaded because your apps are moving so much, then something is terribly, terribly wrong. Anyway, your postgres instances certainly should not be moving, so stale lookups should not be a problem, even in such a circumstance.

> Lastly, shared resources resulting in a poor query plan, like slow vacuuming, may degrade the db.

Shared resources can slow things down, but I have no experience of that affecting what the appropriate query plan should be.

> Now, I have 0 exp in kubernetes, but I tried to understand the basics and found most of them similar to apache mesos. My use case is that dbs grow, and they grow really fast, so they cannot be the same as immutable containers, but idk. Like, when in need of increased memory, it was OK to do that for a vm and reboot, but for a pod, is there a risk of a redeployment moving the instance to another server? Or else would all the pods on that server get bounced?

As discussed above, movement of pods is not the problem you think it is. Network volumes, no problem moving; local storage, can't move.
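To make that last point concrete, here is a minimal sketch of the two storage approaches as Kubernetes manifests. All names, the storage class, the node hostname, and the disk path are illustrative assumptions, not from any particular setup:

```yaml
# Approach 1: network-backed volume via a PVC.
# If the postgres pod is rescheduled onto another node,
# the new pod simply re-mounts this same volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pgdata
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: networked-ssd   # assumption: some network storage class
  resources:
    requests:
      storage: 100Gi
---
# Approach 2: local storage. The nodeAffinity pins the volume (and
# therefore any pod bound to it) to one specific node, so the pod
# cannot migrate -- the tradeoff for faster local disks.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pgdata-local
spec:
  capacity:
    storage: 100Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/pg               # assumption: local disk mount path
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values: ["node-with-fast-disk"]   # assumption: node name
```

With the first manifest, pod movement is a non-event for the data; with the second, the scheduler can only ever place the pod on the pinned node, which is exactly the "can't move" behavior described above.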