Re: Re: how to display images stored in DB

> >> The web browser sees an image as a single HTTP request. Invoking the PHP
> >> script engine, parsing the script, and executing a SQL query to retrieve
> >> the image from the database is less efficient than letting the web
> >> server
> >> just send the file.


In a simple setup, that is probably true. However, if you use PHP to
do authentication or throttling, then the engine is already there
anyway. On the flip side, the server can still do the heavy lifting
via sendfile(); on Lighttpd you can push the actual sending of the
file back to the web server with the X-Sendfile header.
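
A minimal sketch of that combination, assuming X-Sendfile support is
enabled on the server (mod_xsendfile on Apache, built in on
Lighttpd); the path and session check are made up for illustration:

<?php
// PHP does the gatekeeping; the web server streams the bytes.
session_start();
if (empty($_SESSION['user_id'])) {
    header('HTTP/1.0 403 Forbidden');
    exit;
}
// basename() strips any ../ tricks from the requested name
$file = '/var/images/' . basename($_GET['img']);
if (!is_file($file)) {
    header('HTTP/1.0 404 Not Found');
    exit;
}
header('Content-Type: image/jpeg'); // real code would look this up
header('X-Sendfile: ' . $file);     // server takes over from here
exit;
?>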


> >> Image files do not need to be constrained by the rigid requirements of a
> >> relational database.


File systems are not immune to constraints. For example, ext3 caps a
directory at roughly 32,000 subdirectories. So if you gave each user
a directory to upload files to, you would be stuck at a max of about
32,000 users, or start resorting to silly things like
/S/t/e/Steve.gif
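
A quick sketch of that hashing trick done less sillily, assuming a
two-level fan-out under a hypothetical /var/uploads base (all names
invented for illustration):

<?php
// Spread uploads across hashed subdirectories instead of one
// directory per user, sidestepping ext3's subdirectory cap.
function upload_path($username) {
    $h = md5($username);
    return sprintf('/var/uploads/%s/%s/%s',
        substr($h, 0, 2),   // 256 possible first-level dirs
        substr($h, 2, 2),   // 256 possible second-level dirs
        $username);
}
echo upload_path('Steve'); // e.g. /var/uploads/ab/cd/Steve
?>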

More constraints below..


> > What about when you need to share those files across a 50 node network?


Webfarm scenarios do come to mind. There is the issue of how to keep
all the web servers in sync so each has every file. Then again, if
you are running 50 web servers, the chances of them all being able to
house all your files (1 petabyte, as an example given) are not very
good.


Some databases support raw device access, in which case their
performance is probably just as good as the OS's (and quite probably
better if you want to store meta information about the file alongside
its data).
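
Since the thread is about displaying images stored in the DB, here is
a minimal sketch of serving one straight out of MySQL, assuming a
hypothetical images table with mime_type and data columns (schema is
an assumption, not from the thread):

<?php
// Fetch the blob and its stored metadata, then emit it as the
// HTTP response body with the right Content-Type.
$db = new mysqli('localhost', 'user', 'pass', 'site');
$stmt = $db->prepare('SELECT mime_type, data FROM images WHERE id = ?');
$id = (int) $_GET['id'];
$stmt->bind_param('i', $id);
$stmt->execute();
$stmt->bind_result($mime, $data);
if ($stmt->fetch()) {
    header('Content-Type: ' . $mime);
    echo $data;
} else {
    header('HTTP/1.0 404 Not Found');
}
?>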


While the database method makes for a good way of doing offsite
backups (replicating out to a slave), the database can easily become a
choke point as well.


> > I'd keep it in a database, then when I need it cache a local copy on the
> > filesystem. Then I can just check the timestamp in the database to see
> > if the file has changed. Voila, multi-node high availability images.


You will need to keep information about what you are caching so that
you can prune it. Best choice, I guess, would be to keep a local
SQLite db on each webserver to track the cache. However, you had
better understand your filesystem. On ext3, for example, if you have
a lot of files in a directory you will likely enable the dir_index
option when creating the filesystem. But realize that deleting files
does not delete leaf nodes of the btree, which can have all sorts of
non-obvious performance and disk usage effects.
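
A rough sketch of the timestamp check described in the quote, reusing
the same hypothetical images table plus an assumed updated_at column
and cache directory:

<?php
// Return a local file path for the image, refreshing the cached
// copy whenever the database row is newer than the file on disk.
function cached_image(mysqli $db, $id) {
    $cache = '/var/cache/img/' . (int) $id;
    $stmt = $db->prepare('SELECT updated_at FROM images WHERE id = ?');
    $stmt->bind_param('i', $id);
    $stmt->execute();
    $stmt->bind_result($updated);
    if (!$stmt->fetch()) {
        return null;                 // no such image in the DB
    }
    $stmt->close();
    if (!is_file($cache) || filemtime($cache) < strtotime($updated)) {
        // stale or missing: pull the blob down once
        $stmt = $db->prepare('SELECT data FROM images WHERE id = ?');
        $stmt->bind_param('i', $id);
        $stmt->execute();
        $stmt->bind_result($data);
        $stmt->fetch();
        file_put_contents($cache, $data);
        $stmt->close();
    }
    return $cache; // serve this path, e.g. via X-Sendfile as above
}
?>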

There are hybrid models, of course. MogileFS, for example, uses MySQL
to store the file metadata, but not the file itself. Instead a set of
daemons spreads the files across a farm of servers, keeping multiple
replicas for fault tolerance and performance reasons. The files
themselves are stored using a combination of directory mazes and
hashes to avoid the typical filesystem issues.

So there are lots of ways to deal with the issue. Depends on your
constraints on time, complexity, and scalability. After all, if you
only have 10,000 users, who cares! 100 million might be different.

-s


