Yes, in today's rich world "pets" and "cattle" have no fixed definition :)
When I was working, we had 120+ pg clusters per env (all Puppet-managed, with FDW, shards, multiple replicas, logical replication, PITR, and more), with sizes varying from 2GB to 1.5TB, and none of them were use-and-throw.
But I get your point: if we have many pg nodes in a one-DB-per-app kind of design, we need some kind of automation to scale to that level, and given the k8s marketing and sidecar systems, I appreciate that opinion.
And it seems k8s can handle persistent-storage-based designs well, much better than Apache Mesos.
Of course, my experience in a Postgres DBA role is less than 2 years, so I ask too many questions :), as I was mostly exposed to stateless services in container-based environments. But what do I have to lose by asking :)
My point of concern was how the pg instances get tuned for heavy workloads in a shared environment. We used to tune kernel params based on the typical workload's requirements.
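For example, a dedicated pg box would get something along these lines (illustrative knobs with placeholder values, not recommendations; the real numbers depended entirely on RAM and workload):

```
# /etc/sysctl.d/90-postgres.conf -- placeholder values, not recommendations
vm.swappiness = 1                     # keep PG buffers/backends resident
vm.overcommit_memory = 2              # fail malloc instead of OOM-killing the postmaster
vm.overcommit_ratio = 90
vm.dirty_background_bytes = 67108864  # start background writeback early
vm.dirty_bytes = 536870912            # cap dirty pages so checkpoints don't stall IO
vm.nr_hugepages = 2048                # only if huge_pages is on in postgresql.conf
```

In a shared/multi-tenant environment you can't pick these per workload, which was the worry.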
Autoscaling for pg is not the same as for stateless systems. A connection-limit bump requires a restart (yes, PgBouncer helps, but when apps autoscale they hammer the DB hard), and that restart has to be orchestrated in such a way that the cluster survives, or else nodes shut down due to discrepancies in param values between primary and replica.
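To make the discrepancy point concrete, here's a rough Python sketch (hypothetical hostnames/DSNs, assumes psycopg2) of the kind of pre-flight check you'd want before bumping max_connections; a hot standby needs a handful of restart-only params to be at least the primary's values:

```python
# Sketch: compare restart-only GUCs across a primary and a replica
# before bumping max_connections. Hostnames are hypothetical; assumes
# psycopg2 is installed and the nodes accept these connections.
import psycopg2

DSNS = {
    "primary": "host=pg-primary dbname=postgres user=postgres",
    "replica1": "host=pg-replica1 dbname=postgres user=postgres",
}

# GUCs a hot standby requires to be >= the primary's values; if they are
# lower, the standby refuses to start (newer PG pauses WAL replay instead).
STANDBY_GUCS = (
    "max_connections",
    "max_worker_processes",
    "max_wal_senders",
    "max_prepared_transactions",
    "max_locks_per_transaction",
)

def postmaster_settings(dsn):
    """Fetch settings that only change on restart (context = 'postmaster')."""
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(
            "SELECT name, setting FROM pg_settings WHERE context = 'postmaster'"
        )
        return dict(cur.fetchall())

primary = postmaster_settings(DSNS["primary"])
for node, dsn in DSNS.items():
    if node == "primary":
        continue
    replica = postmaster_settings(dsn)
    for guc in STANDBY_GUCS:
        if int(replica[guc]) < int(primary[guc]):
            print(f"{node}: {guc}={replica[guc]} is below primary's {primary[guc]}")
```

An operator has to bake exactly this ordering into its restart logic: replicas first, primary last.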
But since Crunchy and Zalando both have operators, I think I should learn to deploy them in a minikube kind of setup to play with my concerns.
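From a skim of the Zalando repo, the quickstart looks roughly like the below (file names from memory of their manifests dir, so worth double-checking against their docs):

```
git clone https://github.com/zalando/postgres-operator.git
cd postgres-operator
kubectl create -f manifests/configmap.yaml                      # operator config
kubectl create -f manifests/operator-service-account-rbac.yaml  # RBAC
kubectl create -f manifests/postgres-operator.yaml              # the operator itself
kubectl create -f manifests/minimal-postgres-manifest.yaml      # a tiny demo cluster
```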
Anyways, thanks for answering. That helped.