On Mar 28, 2010, at 2:45 PM, Nathan Rixham <nrixham@xxxxxxxxx> wrote:
Adam Richardson wrote:
"Threading" is only realistically needed when you have to get data
from
multiple sources; you may as well get it all in parallel rather than
sequentially to limit the amount of time your application / script is
sitting stale and not doing any processing.
In the CLI you can leverage forking to the process to cover this.
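A minimal sketch of that forking approach, assuming the pcntl extension is available (CLI only); fetchSource() is a hypothetical stand-in for whatever actually pulls the data:

<?php
// Hypothetical fetchSource() stands in for the real data-fetching work.
$sources = array('query-a', 'query-b', 'feed-c');
$children = array();

foreach ($sources as $source) {
    $pid = pcntl_fork();
    if ($pid === -1) {
        die('fork failed');
    } elseif ($pid === 0) {
        // Child: do one unit of work in parallel, then exit.
        fetchSource($source);
        exit(0);
    }
    // Parent: note the child's pid and keep forking.
    $children[] = $pid;
}

// Parent: wait for every child to finish.
foreach ($children as $pid) {
    pcntl_waitpid($pid, $status);
}
?>

Getting results back into the parent needs some form of IPC (temp files, sockets, shared memory), which is part of why forking only pays off for genuinely independent work.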
When working in the HTTP layer / through a web server, you can leverage HTTP itself by giving each query its own URL and sending out every request in a single HTTP session, allowing the web server to do the heavy lifting and multi-threading; then you get all responses back in the order you requested them.
Regarding leveraging HTTP to achieve multi-threading-like capabilities: I've tried this using my own framework (each individual dynamic region of a page is automatically available as a RESTful call to the same page to facilitate ajax capabilities, and I tried using curl to parallel process each of the regions to see if the pseudo-threading would be an advantage).
In my tests, the overhead of the additional HTTP requests killed any advantage that might have been gained by generating the dynamic regions in a parallel fashion. Do you know of any examples where this actually improved performance? If so, I'd like to see them so I could experiment more with the ideas.
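The parallel fetch described there is typically built on PHP's curl_multi API; a minimal sketch of that pattern, with placeholder URLs standing in for the per-region endpoints:

<?php
// Placeholder region URLs; not the framework's actual endpoints.
$urls = array(
    'http://example.com/page?region=header',
    'http://example.com/page?region=comments',
    'http://example.com/page?region=related',
);

$mh = curl_multi_init();
$handles = array();
foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_multi_add_handle($mh, $ch);
    $handles[] = $ch;
}

// Drive all transfers in parallel until every handle is done.
$running = null;
do {
    curl_multi_exec($mh, $running);
    curl_multi_select($mh); // sleep until there is socket activity
} while ($running > 0);

$responses = array();
foreach ($handles as $ch) {
    $responses[] = curl_multi_getcontent($ch);
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);
?>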
Hi Adam,
Good question, and you picked up on something I neglected to mention.
With HTTP/1.1 came a little-used addition, request pipelining, which allows you to send multiple requests through a single connection; this means you can load up multiple requests and receive all the responses in sequence through a single "call".
Thus rather than the usual chain of:
open connection
send request
receive response
close connection
repeat
you can actually do:
open connection
send requests 1-10
receive responses 1-10
close connection
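A minimal sketch of that pattern over a raw socket, assuming the target server actually honours pipelined requests (the host and paths here are placeholders):

<?php
// Placeholder request paths; the server must support pipelining.
$paths = array('/get-comments?post=123', '/get-tags?post=123');

$fp = fsockopen('example.com', 80, $errno, $errstr, 5);
if (!$fp) {
    die("$errstr ($errno)");
}

// Send every request up front without waiting for responses.
foreach ($paths as $i => $path) {
    $last = ($i === count($paths) - 1);
    fwrite($fp,
        "GET $path HTTP/1.1\r\n" .
        "Host: example.com\r\n" .
        // Close after the final response so feof() fires.
        "Connection: " . ($last ? "close" : "keep-alive") . "\r\n" .
        "\r\n"
    );
}

// Responses arrive back-to-back, in request order.
$raw = '';
while (!feof($fp)) {
    $raw .= fread($fp, 8192);
}
fclose($fp);
// $raw now holds all responses concatenated; split them apart
// using each response's Content-Length header.
?>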
The caveat is one connection per server; but it's also interesting to note that, due to the "Host" header, you can call different "sites" on the same physical machine.
I do have "an old class" which covers this; and I'll send you it
off-list so you can have a play.
In the context of this, it is also well worth noting some additional bonuses.
By factoring each data-providing source (which could even be a single SQL query) into a script of its own, with its own URI, you can implement static caching of results via the web server on a case-by-case basis.
A simple example I often used to use would be as follows:
uri: http://example.com/get-comments?post=123
source:
<?php
// $db and $query are assumed to be set up earlier in the script.
// If the "update" flag file exists, or the static cache doesn't
// exist yet, regenerate the cached query results.
if (
    file_exists('/query-results/update-comments-123')
    || !file_exists('/query-results/comments-123')
) {
    if ($results = $db->query($query)) {
        // Only save the results if they are good.
        file_put_contents(
            '/query-results/comments-123',
            json_encode($results)
        );
        // Clear the flag so later requests serve the cache directly.
        @unlink('/query-results/update-comments-123');
    }
}
echo file_get_contents('/query-results/comments-123');
exit();
?>
I say "used to" because I've since adopted a more restful & lighter
way
of doing things;
uri: http://example.com/article/123/comments
and my webserver simply returns the static file, using the OS file cache and its own cache to keep it nice and speedy.
On the generation side: every time a comment is posted, the script which saves the comment simply regenerates the file containing the static query results.
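A minimal sketch of that regeneration step; the $db wrapper, its query() / escape() methods, and the table layout are assumptions, and only the cache file path follows the earlier example:

<?php
// Hypothetical $db wrapper and table layout.
function save_comment($db, $postId, $body)
{
    // 1. Persist the new comment.
    $db->query(sprintf(
        "INSERT INTO comments (post_id, body) VALUES (%d, '%s')",
        $postId,
        $db->escape($body)
    ));

    // 2. Rebuild the static results file so reads stay query-free.
    $results = $db->query(
        sprintf("SELECT * FROM comments WHERE post_id = %d", $postId)
    );
    file_put_contents(
        '/query-results/comments-' . (int) $postId,
        json_encode($results)
    );
}
?>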
For anybody wondering why... I'll let ab do the talking:
Server Software:        Apache/2.2
Server Hostname:        10.12.153.70
Server Port:            80
Document Path:          /users/NMR/70583/forum_post
Document Length:        10828 bytes
Concurrency Level:      250
Time taken for tests:   1.432020 seconds
Complete requests:      10000
Failed requests:        0
Write errors:           0
Total transferred:      110924352 bytes
HTML transferred:       108323312 bytes
Requests per second:    6983.14 [#/sec] (mean)
Time per request:       35.800 [ms] (mean)
Time per request:       0.143 [ms] (mean, across all concurrent requests)
Transfer rate:          75644.20 [Kbytes/sec] received
Yes, that's 6983 requests per second completed on a bog-standard LAMP box: one dual-core CPU and 2 GB of RAM.
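(For reference, output like the above comes from an ab invocation along the lines of "ab -n 10000 -c 250 http://10.12.153.70/users/NMR/70583/forum_post"; that is, 10000 requests at a concurrency of 250.)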
Reason enough?
Regards!
I am interested in how you are handling security in this process. How are you managing sessions with the RESTful interface? This is the one thing that really interests me with the whole RESTful approach.
Bastien
Sent from my iPod
--
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php