Hi Dave,

How many rows of data are we talking about here, and how much data in total (GiB)? Are you able to provide the table definition? (Can normalisation partition off some of this data?) Have you considered dedicated options for the lookup data, tuning the database appropriately, and keeping that single table?

With Postgres we have schemas, which can provide some convenience and design options. Look at it in terms of how your query will read:

SELECT schema.table.column FROM schema.table;

vs

SELECT schema_table.column FROM schema_table;

Not much in it? However, I tend to go with partitions when they need to be generated on demand, dynamically and automatically (which probably isn't the case here). Schemas have other uses: they provide a level of security (GRANT) and are useful in design when partitioning off blocks of related datasets completely.

Regards,

Julian

On 19/09/13 17:02, Dave Potts wrote:
> Hi List
>
> I am looking for some general advice about the best way of splitting
> a large data table. I have 2 different choices: partitioning or
> different schemas.
>
> The data table refers to the number of houses that can be included in
> a city, so there are a large number of records.
>
> I am wondering, if I decided to partition the table, whether the
> update/access speed might be faster than just declaring a different
> schema per city.
>
> Under partitioning the data table would appear to be smaller, so I
> should get an increase in speed, but the database still has to do
> some sort of indexing.
>
> If I used different schemas, it resolves data protection issues, but
> doing a backup might become a nightmare.
>
> In general, which is the faster access method?
>
> regards
>
> Dave.

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
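For anyone comparing the two approaches Julian describes, here is a minimal sketch. Table and role names (houses, city_role, the schema "london") are hypothetical. At the time of this thread (PostgreSQL 9.x) partitioning meant table inheritance with CHECK constraints; declarative partitioning only arrived in PostgreSQL 10.

```sql
-- Option 1: inheritance-based partitioning, one child table per city.
CREATE TABLE houses (
    id      bigserial,
    city    text NOT NULL,
    address text
);

-- Child table holds only London rows; the CHECK constraint lets the
-- planner skip this table for queries on other cities.
CREATE TABLE houses_london (
    CHECK (city = 'London')
) INHERITS (houses);

-- constraint_exclusion must be on for the planner to prune children.
SET constraint_exclusion = partition;
SELECT * FROM houses WHERE city = 'London';

-- Option 2: one schema per city, with GRANT as the security boundary
-- Julian mentions.
CREATE SCHEMA london;
CREATE TABLE london.houses (LIKE houses INCLUDING ALL);
GRANT USAGE  ON SCHEMA london TO city_role;
GRANT SELECT ON london.houses TO city_role;
```

Note that with the schema approach each city's table is fully independent (separate indexes, separate GRANTs, dump/restore per schema with pg_dump -n), whereas inheritance keeps one logical table that queries can span.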