David's answer is right. Basically, every column added gets an index number which is not recycled. I just want to add that a dump/restore will not bring along the history of dropped columns, thus resetting the column counter for the table.

--
Scott Ribe
scott_ribe@xxxxxxxxxxxxxxxx
https://www.linkedin.com/in/scottribe/

> On Jun 6, 2018, at 10:51 AM, David G. Johnston <david.g.johnston@xxxxxxxxx> wrote:
>
> On Wed, Jun 6, 2018 at 9:39 AM, nunks <nunks.lol@xxxxxxxxx> wrote:
> I reproduced this behavior in PostgreSQL 10.3 with a simple bash loop and a two-column table, one of which is fixed and the other is repeatedly dropped and re-created until the 1600-column limit is reached.
>
> To me this is pretty cool, since I can use this limit as leverage to push the developers onto the right path, but should Postgres be doing that? It's as if it doesn't decrement some counter when a column is dropped.
>
> This is working as expected. When dropping a column, or adding a new column that can contain nulls, PostgreSQL does not, and does not want to, rewrite the physically stored records/table. Thus it must be capable of accepting records formed for prior table versions, which means it must keep track of those now-deleted columns.
>
> I'm sure that there is more to it that requires reading, and understanding, the source code to comprehend; but that does seem to explain why it works the way it does.
>
> David J.
>
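For anyone who wants to see the non-recycled column numbers directly, here is a small sketch (table name `t` is just an example) that inspects the `pg_attribute` system catalog, where dropped columns remain as rows with `attisdropped = true` and keep their original `attnum`:

```sql
-- Demo table; run against any scratch PostgreSQL database.
CREATE TABLE t (a int, b int);
ALTER TABLE t DROP COLUMN b;
ALTER TABLE t ADD COLUMN b int;

-- The dropped column is still tracked, and attnum keeps climbing;
-- re-adding "b" consumes attnum 3, not the freed slot 2.
SELECT attnum, attname, attisdropped
FROM pg_attribute
WHERE attrelid = 't'::regclass AND attnum > 0
ORDER BY attnum;
```

On the versions I have checked, the dropped column shows up with a placeholder name like `........pg.dropped.2........`, which is why a loop of drop/re-create eventually hits the 1600 limit even though the table only ever has two visible columns.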