Hello!
I'm trying to support an application in production at work, and for some obscure reason the developer made it drop and re-create a column periodically.
I know this is bad practice (to say the least), and I'm telling them to fix it, but after roughly the 1600th drop/add cycle PostgreSQL starts raising the column limit error:
ERROR: tables can have at most 1600 columns
I reproduced the behavior on PostgreSQL 10.3 with a simple bash loop over a two-column table: one column is left alone, and the other is repeatedly dropped and re-created until the 1600-column limit is hit.
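In case it helps, this is roughly the loop I used (the database, table, and column names are just placeholders for the test, not the real application's schema):

    # Throwaway database and a two-column table: one column stays,
    # the other gets dropped and re-created in a loop.
    createdb droptest
    psql -d droptest -c "CREATE TABLE t (keep int, dropme int);"

    # After roughly 1600 iterations the ADD COLUMN starts failing with
    # "ERROR: tables can have at most 1600 columns".
    for i in $(seq 1 1700); do
        psql -d droptest -c "ALTER TABLE t DROP COLUMN dropme; ALTER TABLE t ADD COLUMN dropme int;"
    done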
To me this is pretty convenient, since I can use the limit as leverage to push the developers onto the right path, but should Postgres really behave this way? It's as if it never decrements some internal counter when a column is dropped.
Many thanks!
Bruno
----------
“Life beats down and crushes the soul and art reminds you that you have one.”
- Stella Adler