I am using a similar solution, and I ran a test with 20K+ different schemas. Postgres didn't show any slowness at all, even after all 20K schemas (over 2 million tables in total) were created, so I have a feeling it can grow even more.
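If it helps, the scale test itself was nothing fancy. A minimal sketch of that kind of setup looks roughly like this; the function and table definitions below are just placeholders, and the real schemas had many more tables each:

CREATE OR REPLACE FUNCTION create_test_schemas(n integer) RETURNS void AS $$
DECLARE
    i integer;
BEGIN
    FOR i IN 1 .. n LOOP
        -- one schema per client, each with the same set of tables
        EXECUTE 'CREATE SCHEMA client_' || i;
        EXECUTE 'CREATE TABLE client_' || i || '.accounts (
                     id        serial PRIMARY KEY,
                     client_id integer NOT NULL,
                     name      text)';
        EXECUTE 'CREATE TABLE client_' || i || '.transactions (
                     id         serial PRIMARY KEY,
                     client_id  integer NOT NULL,
                     account_id integer REFERENCES client_' || i || '.accounts (id),
                     amount     numeric)';
    END LOOP;
END;
$$ LANGUAGE plpgsql;

-- creates 20,000 schemas in one call; batching the calls keeps each
-- transaction smaller and makes it easier to watch progress
SELECT create_test_schemas(20000);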
Guy.

On 9/28/06, snacktime <snacktime@xxxxxxxxx> wrote:

I'm re-evaluating a few design choices I made a while back, and one that keeps coming to the forefront is data separation. We store sensitive information for clients. A database for each client isn't really workable, or at least I've never thought of a way to make it workable, as we have several thousand clients and the databases all have to be accessed through a limited number of web applications where performance is important and things like persistent connections are a must. I've always been paranoid about a programmer error in an application resulting in data from multiple clients getting mixed together.

Right now we create a schema for each client, with each schema having the same tables. The connections to the database are from an unprivileged user, and everything goes through functions that run at the necessary privileges. We set search_path to public,user; user data is in the user's schema and the functions are in the public schema. Every table has a client_id column. This has worked well so far, but it's a real pain to manage, and as we ramp up I'm not sure it's going to scale that well.

So anyway, my question is this. Am I being too paranoid to put all the data into one set of tables in a common schema? For thousands of clients, what would you do?

Chris
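For anyone following along, the per-client-schema pattern described above boils down to roughly the following. This is only a sketch; the role, schema, table, and function names are placeholders, not anyone's actual production code:

-- unprivileged role the web applications connect as; it gets no direct
-- privileges on any client schema or table
CREATE ROLE webapp LOGIN PASSWORD 'secret';

-- one schema per client, each with the same set of tables
CREATE SCHEMA client_42;
CREATE TABLE client_42.accounts (
    id        serial PRIMARY KEY,
    client_id integer NOT NULL,
    name      text
);

-- functions live in public and run SECURITY DEFINER, i.e. with the
-- privileges of their owner rather than of the caller
CREATE OR REPLACE FUNCTION public.add_account(p_client_id integer, p_name text)
RETURNS integer AS $$
DECLARE
    new_id integer;
BEGIN
    -- "accounts" is unqualified, so it resolves through the caller's
    -- search_path to that client's schema; client_id is stored as well,
    -- as an extra sanity check
    INSERT INTO accounts (client_id, name)
    VALUES (p_client_id, p_name)
    RETURNING id INTO new_id;
    RETURN new_id;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;

GRANT EXECUTE ON FUNCTION public.add_account(integer, text) TO webapp;

-- the application points each session (or each request on a persistent
-- connection) at the right client before calling anything:
SET search_path TO public, client_42;
SELECT public.add_account(42, 'Acme Corp');

Since webapp only has EXECUTE on the public functions and no privileges on the client schemas, a stray hand-written query in the application fails with a permission error instead of quietly reading another client's data.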
--
Family management on rails: http://www.famundo.com - coming soon!
My development related blog: http://devblog.famundo.com