Hello! I'm looking for a way to store a fairly large amount of data in an efficient manner. There is about 8 GB per day to store, and I feel that the approach I'm using now isn't what I want to end up with. Let me describe how I've tried to solve this problem before I ask my question in more detail.

The data to be inserted consists of user requests to several sites, along with a few additional fields (like a user id). At first I decided to store all the requests coming to my server in one table called "request". The db structure was as follows:

CREATE TABLE site (
    site_id varchar(80) PRIMARY KEY,
    -- <skipped>
);

CREATE TABLE request (
    user_id int PRIMARY KEY,
    site_id varchar(80) REFERENCES site,
    -- <skipped>
);

It turned out that storing millions of requests in only one table wasn't really a good idea, because the index on the "request" table doesn't seem to work fast enough on that much data.

Now I'm using a separate table for each site to store requests. Those tables are created dynamically by the client application when the command "create_site" is called (a rough sketch of what gets created is at the end of this message). Unfortunately this solution comes with one disadvantage I don't like: the features aimed at controlling referential integrity (like the REFERENCES constraint) are no longer my friends.

Is there a way that combines the advantages of the two approaches listed above (referential integrity + good performance)? I started learning PostgreSQL/SQL a few weeks ago, so there may be an obvious solution I haven't heard about yet. Please point it out.

Thanks.

--
Sergey Samokhin
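
For concreteness, here is a minimal sketch of what the dynamically created per-site tables might look like. The request_<site_id> naming pattern and the column list are assumptions for illustration only; they are not taken from the post above.

-- Roughly what "create_site" is described as creating, one table per site
-- (table name and columns are assumed for illustration):
CREATE TABLE request_example_site (
    user_id int PRIMARY KEY,
    -- <skipped>
);

With this layout the link between a request table and its row in "site" lives only in the table name, so there is no column left for a REFERENCES site constraint to check, which is presumably why the referential integrity features stop being useful here.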