
Re: Schema per user?

On Thu, May 7, 2009 at 4:59 PM, David Kerr <dmk@xxxxxxxxxxxxxx> wrote:
> On Thu, May 07, 2009 at 03:06:11PM -0600, Scott Marlowe wrote:
> - On Thu, May 7, 2009 at 2:12 PM, Anders Steinlein <anders@xxxxxxxxxxxx> wrote:
> - >
> - > On May 7, 2009, at 10:05 PM, Scott Marlowe wrote:
> - >
> - >> On Thu, May 7, 2009 at 1:45 PM, Anders Steinlein <anders@xxxxxxxxxxxx>
> - >> wrote:
> - > Generally though, what made you consider such a solution? Same advantages as
> - > I mentioned? One thing I'm a bit unsure how best to solve is where to
> - > place the "users" or some such table for authentication and other "shared"
> - > info -- simply in the "public" schema, perhaps?
> -
> - We're looking at a "schema per group" fit for a certain application
> - and we have a lot of groups (in the 100,000 to 1,000,000 range.)  We're
> - also looking at partitioning to multiple db servers if need be.  It's
> - a compelling app, and schemas allow us to have one copy of the master
> - user data etc., and the app just has to have a different search path and
> - voilà, we're integrated.
> -
>
> Interesting, we were looking at something similar but dismissed it because it
> seemed like a maintenance nightmare; instead we're planning to go with
> partitioning.

We're looking at both, eventually.  They're kind of two parts of the
same solution.  Schemas make having many isolated users easy;
partitioning (across dbs with something like pl/proxy or in the app
layer) lets us scale to millions or more schemas before things
start getting too slow.
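For the partitioning side, a minimal PL/Proxy sketch of that routing idea
(the cluster name, function, and hash key here are assumptions for
illustration, not from this thread) would look something like:

```sql
-- On the proxy database: route each call to one of the partition
-- servers in a hypothetical 'userdb' cluster by hashing the key.
CREATE FUNCTION get_user_email(i_username text)
RETURNS text AS $$
    CLUSTER 'userdb';
    RUN ON hashtext(i_username);
$$ LANGUAGE plproxy;
```

The app then calls get_user_email() on the proxy and never needs to know
which physical server holds that user's schema.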

> From a programming aspect, we're using JPA, anyone know if you can set search_path
> with JPA/JDBC?

Can you issue
set search_path = public, myschema;
as a command, just straight-up SQL?

We don't use Java for much here, so I don't know.
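For what it's worth, with plain JDBC you can get at the underlying
connection and issue the SET yourself.  A minimal sketch (the class,
method, and schema names below are hypothetical; JPA itself has no
portable search_path setting):

```java
// Sketch: build a per-tenant "SET search_path" statement for use with a
// raw JDBC Statement.  Names are assumptions, not from this thread.
public class SearchPath {

    // Double-quote the identifier so unusual schema names can't break
    // or inject into the statement.
    static String quoteIdent(String ident) {
        return "\"" + ident.replace("\"", "\"\"") + "\"";
    }

    static String setSearchPath(String tenantSchema) {
        return "SET search_path = " + quoteIdent(tenantSchema) + ", public";
    }

    public static void main(String[] args) {
        System.out.println(setSearchPath("myschema"));
        // With a live connection it would be run as plain SQL:
        // try (java.sql.Statement st = conn.createStatement()) {
        //     st.execute(setSearchPath(tenantSchema));
        // }
    }
}
```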

> Also, how do you plan to handle schema updates in that model, inheritance?

Yeah, we're looking at using inheritance to allow us to update them
all at once.  But if we need to do more than that, we can always
automate it with scripts.
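As a rough sketch of that approach (table and schema names are made up
for illustration), each tenant table inherits from a master table in
public, so one ALTER on the parent reaches every schema:

```sql
-- Parent table holds the canonical definition.
CREATE TABLE public.widgets_master (
    id   serial PRIMARY KEY,
    name text NOT NULL
);

-- Each tenant schema gets a child table with the same columns.
CREATE SCHEMA tenant_1;
CREATE TABLE tenant_1.widgets () INHERITS (public.widgets_master);

-- Adding a column to the parent propagates to every child at once.
ALTER TABLE public.widgets_master ADD COLUMN created_at timestamptz;
```

One caveat: children don't inherit the parent's primary-key or unique
constraints, so those still have to be added per schema (which is where
the scripts come in).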

> You don't have a concern with dealing with 100,000 * n tables?

Of course I do.  But I kinda want to know when we start having issues.
Our machines are pretty beefy, and we can double the memory to 64G
and will probably double (or more) the cores on them from 8 to 16 or
24 over time.  So, as long as the catalog tables aren't truly
monstrous we should be ok.  But only benchmarking will really give us
an idea.

> My background is with oracle, and in general it would have cleanup issues with
> tracking that many tables/segments. Does postgres just handle an insane amount
> of tables better?

I don't think pgsql can handle many more tables than Oracle in terms
of performance; it's just way easier to maintain them, due to useful
features like inheritance and such.  And a multi-element search_path
lets you mix more than one schema for a user to see seamlessly.  You
have to be WAY more schema-aware when writing an app in Oracle, at
least in the older versions I've used.  Haven't touched 11g, and don't
really plan to unless I have to. :)

-- 
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

