James Mello wrote:
> Unless there was a way to guarantee consistency, it would be hard at
> best to make this work. Convergence on large data sets across boxes is
> non-trivial, and diffing databases is difficult at best. Unless there
> was some form of automated way to ensure consistency, going 8 ways into
> separate boxes is *very* hard. I do suppose that if you have fancy
> storage (EMC, Hitachi) you could do BCV or Shadow copies. But in terms
> of commodity stuff, I'd have to agree with Merlin.
If you're talking about data consistency, I don't see why that's an
issue in a bulk-load/read-only setup. Either bulk load on 1 server and
then do a file copy to all the others -- or simultaneously bulk load on
all servers.
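As a minimal sketch of the load-then-copy approach (host names and the data directory path here are hypothetical placeholders, not from the original post):

```python
# Sketch of bulk-load-then-copy: load on one primary, then fan the data
# directory out to the read-only replicas with rsync. Hosts and paths
# are hypothetical; adapt to your own layout.

def copy_commands(primary, replicas, data_dir="/var/lib/pgsql/data"):
    """Build the rsync commands that push the loaded data to each replica."""
    return [
        ["rsync", "-a", "--delete", data_dir + "/", f"{host}:{data_dir}/"]
        for host in replicas
        if host != primary
    ]

if __name__ == "__main__":
    cmds = copy_commands("db1", ["db2", "db3", "db4"])
    for c in cmds:
        print(" ".join(c))
        # subprocess.run(c, check=True)  # uncomment to actually copy
```

The simultaneous-bulk-load variant skips the copy step entirely at the cost of N times the load work.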
If you're talking about consistency in directing queries to the
appropriate servers, I agree that's a more complicated issue but not
insurmountable. If you don't use persistent connections, you can
probably get pretty good routing using DNS -- monitor servers by looking
at top/iostat/memory info/etc and continually change the DNS zonemaps to
direct traffic to less busy servers. (I use this method for our global
load balancers -- pretty easy to script via Perl/Python/etc.) Mind you,
since you need a dual-processor motherboard anyway to get PCI-X, every
machine would be a 2x dual-core, so there's enough CPU power to handle
the cases where 2 or 3 queries get sent to the same server
back-to-back. Of course, I/O would take a hit in this case -- but I/O
would take a hit in every case on a single 16-core mega system.
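The DNS trick above can be sketched roughly like this (the hostnames, IPs, and load numbers are hypothetical; a real script would pull top/iostat figures over ssh and push the new record with rndc or nsupdate):

```python
# Sketch of DNS-based routing: pick the least busy server from polled
# load figures and emit a short-TTL A record pointing the service name
# at it. All names/IPs are placeholder values.

def least_busy(loads):
    """Return the server key with the lowest load figure."""
    return min(loads, key=loads.get)

def zone_record(name, ip, ttl=30):
    """Build a short-TTL A record so clients re-resolve frequently."""
    return f"{name} {ttl} IN A {ip}"

if __name__ == "__main__":
    # In practice these numbers would come from monitoring each box.
    loads = {"10.0.0.1": 2.5, "10.0.0.2": 0.4, "10.0.0.3": 1.1}
    target = least_busy(loads)
    print(zone_record("db.example.com.", target))
```

The short TTL is what makes this work as a load balancer: clients re-resolve often enough to follow the zonemap changes.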
If you use persistent connections, it'll definitely require extra
programming beyond simple scripting. Take one of the open-source
projects like PgPool or SQLRelay and alter it so it monitors all servers
to see which server is least busy before passing a query on.
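The core of that modification might look something like this least-busy selection logic (a standalone sketch, not actual PgPool or SQLRelay code; backend names are placeholders):

```python
# Sketch of least-busy routing for a connection pooler: track in-flight
# queries per backend and route each new query to the backend with the
# fewest. This would replace a pooler's default round-robin selection.

class LeastBusyRouter:
    def __init__(self, backends):
        # Count of in-flight queries per backend.
        self.active = {b: 0 for b in backends}

    def acquire(self):
        """Pick the backend with the fewest in-flight queries."""
        backend = min(self.active, key=self.active.get)
        self.active[backend] += 1
        return backend

    def release(self, backend):
        """Mark a query on `backend` as finished."""
        self.active[backend] -= 1

if __name__ == "__main__":
    router = LeastBusyRouter(["db1", "db2"])
    a = router.acquire()      # first query goes to db1
    b = router.acquire()      # db1 is busy, so db2
    router.release(a)         # db1 finishes
    print(router.acquire())   # db1 is least busy again
```

A production version would weight the choice by actual CPU/IO stats from each box rather than just the in-flight count, but the routing decision itself stays this simple.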