Thomas Kellerer wrote:
> Merlin Moncure wrote on 05.04.2007 23:24:
>> I think most reasons not to store binaries in the database boil down
>> to performance.
> Having implemented an application where the files were stored in the
> filesystem instead of the database, I have to say that in my
> experience I would store the files in the DB next time. Once the
> number of files in a directory exceeds a certain limit, the directory
> becomes very hard to handle: things like "dir" or "ls", or listing
> the contents over an FTP connection, become extremely slow (on HP/UX
> as well as Windows).
This is very true - I've ended up with data stores containing directory
hierarchies to handle this issue:
1/
  1/
  ...
  16/
2/
  1/
  ...
  16/
3/
...
16/
And so on - the more files, the more directories. The files are then
stored in the lowest-level directories, using an appropriate algorithm
to distribute them fairly evenly (a sketch of one such scheme follows).
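For illustration, here's a minimal sketch of one such scheme in Python.
The fan-out of 16, the two levels, and the helper names are my own
assumptions, chosen to match the layout above; it hashes the filename
so files spread out evenly without any central bookkeeping:

    import hashlib
    import os

    FANOUT = 16  # assumed fan-out per level, matching the layout above

    def shard_path(root, filename, levels=2):
        # Derive one subdirectory name (1..FANOUT) per level from a
        # hash of the filename, so the distribution stays even.
        digest = hashlib.md5(filename.encode("utf-8")).digest()
        parts = [str(digest[i] % FANOUT + 1) for i in range(levels)]
        parts.append(filename)
        return os.path.join(root, *parts)

    def store(root, filename, data):
        path = shard_path(root, filename)
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "wb") as f:
            f.write(data)

Retrieval uses the same shard_path() call, so no lookup table mapping
files to directories is needed.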
> And you have to back up only _one_ source (the database), not two.
> Moving the data around from system A to system B (e.g. staging
> (Windows) -> production (HP/UX)) is a lot easier when you can simply
> back up and restore the database (in our case it was an Oracle
> database, but this would be the same for PG).
Well, this is the big problem - on a busy system your database backup
can easily get out of sync with your filesystem backup, and on top of
that you have no automatic transactional control over anything you
store in the filesystem.
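With everything in the database, a single consistent snapshot is one
command - something along these lines (database and file names are
placeholders):

    pg_dump -Fc appdb -f appdb.dump
    pg_restore -d appdb appdb.dump

pg_dump gives you a consistent view of the data even while the system
is busy, which is exactly what a separate filesystem copy can't
guarantee.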
Consequently, the more recent systems I've built have stored the blobs
in PostgreSQL.
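For the record, storing a blob transactionally is straightforward with
a bytea column - here's a minimal sketch using psycopg2 (the documents
table, column names, and connection string are assumptions):

    import psycopg2

    conn = psycopg2.connect("dbname=appdb")  # assumed connection string
    try:
        with conn.cursor() as cur:
            # Metadata and file content commit (or roll back) together.
            with open("report.pdf", "rb") as f:
                cur.execute(
                    "INSERT INTO documents (name, data) VALUES (%s, %s)",
                    ("report.pdf", psycopg2.Binary(f.read())),
                )
        conn.commit()
    except Exception:
        conn.rollback()
        raise
    finally:
        conn.close()

If anything fails, the rollback leaves no orphaned row and no stray
file on disk - the transactional control you simply don't get with a
filesystem store.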
Regards, Dave