These tables are created and maintained automatically by software. A table is created with some initial columns, and as respondents take a survey, each data point we collect is stored in its own column; these columns are created on the fly by the application. There is no need to "maintain" them manually, as the software does that for us. While it may seem like bad practice, it has actually worked out very well for us for years.
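To make this concrete (hypothetical names; the real statements are generated by the application), the software effectively does something like this whenever a survey gains a new data point:

    -- Hypothetical sketch: a new data point becomes a new column,
    -- added on the fly by the application.
    ALTER TABLE survey_responses ADD COLUMN q147 text;

    -- Each respondent's answer is then stored in that column.
    UPDATE survey_responses SET q147 = 'Yes' WHERE respondent_id = 42;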
When we reach the column limit we create sub-tables, but reporting on this data then becomes an issue, because you have to know which data point is in which sub-table; that is what I am trying to get around.
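Concretely, the workaround amounts to keeping a catalog of where each data point landed, and every report has to consult it first. A minimal sketch, again with hypothetical names:

    -- Once survey_responses hits the limit, new columns go to
    -- survey_responses_2, and a catalog records the split.
    CREATE TABLE data_point_location (
        column_name text PRIMARY KEY,  -- e.g. 'q1601'
        sub_table   text NOT NULL      -- e.g. 'survey_responses_2'
    );

    -- Reporting code must look up where a data point lives...
    SELECT sub_table FROM data_point_location
     WHERE column_name = 'q1601';
    -- ...before it can even build the real report query.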
I think it's very obvious that the Postgres developers have no interest in going beyond 1600 columns in the foreseeable future, which forces us to find creative ways around it, but I just don't see why it has to be this way. Even views, which are not even stored entities, have this limit.
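For example, you cannot even hide the split behind one combined view once the sub-tables together exceed the cap (hypothetical names again; the select-list limit is 1664 entries, only slightly above the 1600-column table limit):

    -- Sketch: a view over both sub-tables is rejected once the
    -- combined column count passes the target-list limit.
    CREATE VIEW survey_responses_all AS
    SELECT r1.*, r2.*
      FROM survey_responses r1
      JOIN survey_responses_2 r2 USING (respondent_id);
    -- ERROR:  target lists can have at most 1664 entries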
I'm sure there are good reasons for this in terms of how the Postgres code works, which is way over my head, but when you look at the comparison of database limitations ( http://en.wikipedia.org/wiki/Comparison_of_relational_database_management_systems#Limits ) it seems that in every area except column name size and number of columns, Postgres has huge advantages over other systems.