On Aug 14, 2008, at 1:00 PM, juliano.freitas@xxxxxxxxxxxxx wrote:
We're developing a project which uses PostgreSQL to store binary documents. Since our system is likely to grow to some terabytes within two years, I'd like to ask if some of you have had experience storing a huge amount of blob files in Postgres. How does it scale in performance?
It depends on your access patterns. If this is an OLTP database, you need to think really hard about putting that stuff in the database, because the blobs will compete with your hot data for shared buffers and seriously hurt your caching ability. If we had the ability to define buffer-size limits per tablespace you could handle it that way, but we don't.
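As a rough way to watch that effect, the pg_buffercache contrib module shows what's actually sitting in shared buffers. A minimal sketch, essentially the example query from the pg_buffercache docs (note that bytea blobs will mostly show up under the table's pg_toast_* relation rather than the table itself):

-- count how many 8K shared buffers each relation in the
-- current database is occupying, biggest consumers first
SELECT c.relname, count(*) AS buffers
  FROM pg_buffercache b
  JOIN pg_class c ON b.relfilenode = c.relfilenode
  JOIN pg_database d ON b.reldatabase = d.oid
 WHERE d.datname = current_database()
 GROUP BY c.relname
 ORDER BY 2 DESC
 LIMIT 10;

If the top of that list is dominated by your document table's TOAST relation, the blobs are pushing your hot data out of cache.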
Another consideration: why do you want to put this data in a database in the first place? It may be convenient, but if that's the only reason, you could be hurting yourself in the long run.
BTW, after seeing the SkyTools presentation at pgCon this year I realized there's a pretty attractive middle ground between storing this data in your production database and storing it in the filesystem. Using plproxy and pgBouncer, it wouldn't be hard to store the data in an external database. That gives you the ease-of-management of a database, but keeps the blobs out of your production database.
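Something like this, as an untested sketch: it assumes you've configured a plproxy cluster named 'blobstore' that points at the external database, and that matching store/fetch functions doing the actual INSERT/SELECT are defined over there.

-- proxy functions in the production database; plproxy forwards
-- each call to the external blob database(s)
CREATE FUNCTION store_document(doc_id int4, doc bytea)
RETURNS void AS $$
    CLUSTER 'blobstore';
    -- hashint4() just spreads documents across partitions;
    -- with a single partition, RUN ON ANY works too
    RUN ON hashint4(doc_id);
$$ LANGUAGE plproxy;

CREATE FUNCTION fetch_document(doc_id int4)
RETURNS bytea AS $$
    CLUSTER 'blobstore';
    RUN ON hashint4(doc_id);
$$ LANGUAGE plproxy;

Your application keeps calling plain SQL functions on the production database, plproxy ships the bytea over to the blob store, and pgBouncer on the blob side keeps the connection count sane.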
--
Decibel!, aka Jim C. Nasby, Database Architect  decibel@xxxxxxxxxxx
Give your computer some brain candy! www.distributed.net Team #1828