
Re: Equivalent for AUTOINCREMENT?

Michelle Konzack wrote:

> The ONLY real option would be to go to a HostingProvider with VERY good
> and reliable Internet connectivity, install my WHOLE PostgreSQL database
> there, and let my other websites access it over the internet...

... making them subject to problems with transit between the host
handling PostgreSQL and the host handling the rest of the work. You need
every hop of the main routes between the Pg host and all your other
providers to work reliably, with stable and reasonably low latency, all
the time. This isn't very likely.

You'll also have big latency problems, and if your design doesn't use
any of the more powerful database features you'll probably do LOTS of
round trips, lots of read-modify-writes, etc., and you'll suffer a
horrible hit to performance.

> But I have not found a single Hosting-Provider which is willing to host
> a couple of Databases for which I need 6 TByte of disk space...

You would probably need dedicated co-located hosting where you provide
the machine and they provide rack space, stable power, an Ethernet port
or two, access to a console server with remote power cycle capability,
and hopefully access to some backup storage.

You'd have a hard time fitting 6TB of fast, reliable storage even in a
5RU enclosure (say 24 x 300GB SAS disks in a high-density storage server
chassis), so you'd probably need an external FC or SAS storage enclosure,
probably in addition to internal storage.

$LOTS.

On the other hand, you should expect to pay $LOTS if you need to store
6TB of data with fast, reliable access to it.

At least with SAS becoming so standard you don't need to worry about
Fibre Channel anymore.

If you don't need all of your data to be accessible particularly
quickly, you can save megabucks by using a set of 1TB SATA disks for
your bigger, more rarely used stuff. If you don't have too many big
indexes you might be able to keep the indexes on the SAS disks for
faster access.
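
In Pg terms that usually means tablespaces. Something like this (the
mount points and table/index names are made up, purely an illustration,
assuming the SATA and SAS arrays sit on separate file systems):

-- Hypothetical mount points for the slow SATA array and the fast SAS array
CREATE TABLESPACE slow_sata LOCATION '/mnt/sata/pgdata';
CREATE TABLESPACE fast_sas  LOCATION '/mnt/sas/pgdata';

-- Park the big, rarely read archive table on the cheap disks...
ALTER TABLE archive_big SET TABLESPACE slow_sata;

-- ... but keep the index it's usually searched by on the fast disks.
CREATE INDEX archive_big_created_idx
    ON archive_big (created) TABLESPACE fast_sas;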

> A dedicated Root-Server with 300 GByte SAS in RAID-1, without Hotfix,
> already costs 150 Euro per month.

Where is it hosted? At that price, I want one.

> Can you recommend hosting the database on a separate machine on the
> Internet at another ISP?

Personally, I wouldn't do it, because of the aforementioned problems
with latency and with unstable transit. I guess it depends on your
price/performance/reliability tradeoff requirements.

>> If you want to impose strong data integrity checking as part of your
>> schema, and handle violations of those checks cleanly in your
>> application, then you'll probably be doing database-specific things
>> there too.
> 
> This is why I have asked in another thread on <lists.php.net> how to make
> an "API" for my programs which can then be adapted to the specific DB.
> 
> Since I have no need for complex operations

If you host your database somewhere far enough away from the application
that you're seeing several-hundred-millisecond latencies then you WILL
need complex operations. You will need to do as much work as possible on
the database end, avoid read-modify-writes (but you should do that
anyway), use SQL or PL/pgSQL functions to batch operations together, etc.
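
For example, something along these lines (the table and function names
are invented, purely illustrative) turns three round trips into one:

-- A minimal sketch: do the whole read-modify-write on the server so the
-- application only pays for one round trip. Names are hypothetical, and
-- the usual concurrent-insert race is ignored for brevity.
CREATE OR REPLACE FUNCTION add_page_hit(p_page integer, p_hits integer)
RETURNS integer AS $$
BEGIN
    UPDATE page_stats SET hits = hits + p_hits WHERE page_id = p_page;
    IF NOT FOUND THEN
        INSERT INTO page_stats (page_id, hits) VALUES (p_page, p_hits);
    END IF;
    RETURN (SELECT hits FROM page_stats WHERE page_id = p_page);
END;
$$ LANGUAGE plpgsql;

-- One network round trip from the application:
SELECT add_page_hit(42, 1);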

>> So ... targeting a specific database isn't all bad, so long as you think 
> 
> Right, but there are MANY applications which are really simple, where
> database access could easily be ported, but they HARDCODE the database
> functions inside the application instead of exporting them to an
> INCLUDE, for example, and of course without any comments...

Yeah, I agree that's pretty objectionable. Whether it's bad because the
database access is simple (i.e. they're not using the database's ability
to do lots of the hard work for them) or because they've mixed it in
with the rest of the code, I don't know.

I'll often look at some application and see an "abstraction layer" that
results in the application doing things like:

x = get_value('field', 'table');
x ++;
set_value('field', 'table', x);

ie

SELECT field FROM table;
-- returns `42'
UPDATE table SET field = 43;

... which just makes me want to cry. Concurrency issues? Latency? Argh.
And that's a simple one; it gets WAY better once they come up with their
own procedural methods of doing what could be done with an UPDATE ...
FROM ... WHERE or the like.
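
To spell that out (with made-up table and column names), the
read-modify-write above is one statement, and the loop-over-rows-in-the-app
pattern is one UPDATE ... FROM:

-- Instead of SELECT, increment in the client, then UPDATE:
UPDATE some_table SET field = field + 1 WHERE id = 42;

-- Instead of looping over one table in application code to update another:
UPDATE orders o
   SET total = s.line_total
  FROM (SELECT order_id, sum(amount) AS line_total
          FROM order_lines
         GROUP BY order_id) s
 WHERE o.id = s.order_id;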

That said, I'll confess to using Hibernate (a Java ORM) in large parts
of one of the apps I'm working on at present. It does a pretty good job,
and it's sane enough that you can bypass it and use SQL (via the JDBC
APIs) when you have work to do that it's not well suited for, such as
bulk updates or reporting queries.

--
Craig Ringer

