On 17/11/2013 19:26, Stefan Keller wrote:
Hi Stefan,

I don't think any feature you add to the database server would make app server caches obsolete: app server caches simply have no lag at all, because 1) they don't need a network round trip to the database server, and 2) they don't need to materialize results (I have in mind a Java or .NET app server holding hundreds of thousands of objects in memory). IMHO, no matter how much you improve the database, app server caches provide an additional level of speed that the database alone cannot reach.

That said, I can still see huge improvements on the database server side. Strong in-memory operation would bring substantial gains. For instance, if you have an in-memory database (tables, indexes, etc.) serving all sorts of queries, and only **commit** to disk, then you get unprecedented performance. I would benefit from this architecture, since a typical customer database here is < 64 GB in size (after 2 or 3 years of data recording). So a database server with 64 GB of memory would keep everything in memory and just commit data to disk. In that case, committed data would be instantly available to queries (because it is all in memory), while the log (the changes) is recorded on a fast disk (an SSD, perhaps) and those changes are then made persistent asynchronously, written to slow, massive disks (SCSI or SAS).

This would also allow a hybrid mode of operation (keeping as many data pages as possible in memory, with a target of 50% or more). When the database server starts, it could do a lazy load (data is loaded and kept in memory as it is used) or an eager load (slower startup but faster execution).

Maybe I'm just speculating too much, since I don't know the PostgreSQL internals...

Regards,
Edson
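P.S. To make the app-server-cache point concrete, here is a minimal sketch of the kind of in-process cache I have in mind (Java; the Function-based loader is just an illustration, in practice it would be a JDBC or ORM lookup):

import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Minimal in-process cache: objects live in the app server's heap,
// so a hit never touches the network and never materializes a result set.
class ObjectCache<K, V> {
    private final ConcurrentHashMap<K, V> entries = new ConcurrentHashMap<>();
    private final Function<K, V> loader;   // used only on a cache miss

    ObjectCache(Function<K, V> loader) {
        this.loader = loader;
    }

    V get(K key) {
        // goes to the database only the first time a key is requested
        return entries.computeIfAbsent(key, loader);
    }

    void invalidate(K key) {
        // must be called when the corresponding row changes in the database
        entries.remove(key);
    }
}

The weak point, of course, is invalidation: the app server has to be told when rows change, which is exactly the coupling the database cannot provide for free.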
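P.P.S. As far as I understand the existing knobs (and I may well be wrong about the details), something close to the hybrid mode I describe can already be approximated today; the numbers below are only an illustration for a 64 GB machine:

# postgresql.conf -- illustrative values only
shared_buffers = 48GB               # try to keep (almost) the whole database in memory
effective_cache_size = 60GB         # tell the planner the data is expected to be cached
checkpoint_completion_target = 0.9  # spread data-file writes out over the slow disks
synchronous_commit = on             # a commit only waits for the WAL flush, not the data files

# and place pg_xlog (the WAL) on the fast SSD, e.g. via initdb --xlogdir or a symlink,
# so commits hit the SSD while the data files stay on the SAS/SCSI disks.

An eager load at startup could be simulated crudely by running sequential scans over the hot tables before opening the server to the application.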