Re: Allowing multiple, simultaneous, non-blocking queries.

Richard Quadling wrote:
> Hi.
> 
> As I understand things, one of the main issues in the "When will PHP
> grow up" thread was the ability to issue multiple queries in parallel
> via some sort of threading mechanism.
> 
> Due to the complete overhaul required of the core and extensions to
> support userland threading, the general consensus was a big fat "No!".
> 
> 
> As I understand things, it is possible, in userland, to use multiple,
> non-blocking sockets for file I/O (something I don't seem to be able
> to achieve on Windows http://bugs.php.net/bug.php?id=47918).
> 
> Can this process be "leveraged" to allow for non-blocking queries?
> 
> Being able to throw out multiple non-blocking queries would allow for
> the "queries in parallel" issue.
> 
> My understanding is that at the base level, all queries are running on
> a socket in some way, so isn't this facility nearly already there in
> some way?

Yes.

"Threading" is only realistically needed when you have to get data from
multiple sources; you may as well get it all in parallel rather than
sequentially to limit the amount of time your application / script is
sitting stale and not doing any processing.

In the CLI you can leverage forking the process to cover this.
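
For example, a rough sketch with pcntl (CLI only; the queries and the
connection credentials are just placeholders):

<?php
// Fork one child per query so the queries run concurrently.
// Each child must open its *own* db connection; handles don't survive a fork.
$queries = array(
    'SELECT SLEEP(2), 1',
    'SELECT SLEEP(2), 2',
    'SELECT SLEEP(2), 3',
);
$children = array();

foreach ($queries as $sql) {
    $pid = pcntl_fork();
    if ($pid === -1) {
        die("fork failed\n");
    } elseif ($pid === 0) {
        // Child: run one query on its own connection, then exit.
        $db = mysqli_connect('localhost', 'user', 'pass', 'test');
        $result = mysqli_query($db, $sql);
        // ... write the result somewhere the parent can pick it up ...
        exit(0);
    }
    $children[] = $pid;    // parent: remember the child pid
}

// Parent waits for every child to finish before carrying on.
foreach ($children as $pid) {
    pcntl_waitpid($pid, $status);
}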

When working in the http layer / through a web server, you can leverage
http itself by giving each query its own url and sending all the requests
out within a single http session; that lets the web server do the heavy
lifting and multi-threading, and you get the responses back in the order
you requested them.
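
For instance, something along these lines with curl_multi (the urls are
made up; each one would point at a script that runs a single query and
returns its result):

<?php
// Fire all the requests at once and let curl / the web server run them in
// parallel, then read the responses back in the order they were added.
$urls = array(
    'http://example.com/report.php?part=sales',
    'http://example.com/report.php?part=stock',
    'http://example.com/report.php?part=users',
);

$mh = curl_multi_init();
$handles = array();
foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_multi_add_handle($mh, $ch);
    $handles[] = $ch;
}

// Drive the transfers until they have all completed.
$running = 0;
do {
    curl_multi_exec($mh, $running);
    curl_multi_select($mh, 1);
} while ($running > 0);

// Collect the bodies in request order.
$responses = array();
foreach ($handles as $ch) {
    $responses[] = curl_multi_getcontent($ch);
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);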

In both environments you can use non-blocking sockets to do your
communications with other services and 3rd parties; whilst you can only
process the returned data sequentially, at least all the foreign services
are doing their work at the same time, which cuts down both the
user-perceived runtime and the "real" runtime (since your own php code can
ultimately only run so fast).
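
A minimal sketch of that side of it, using async stream sockets and
stream_select() (the hosts are placeholders, and actually writing the
requests out is left as a comment):

<?php
// Open several connections without blocking, then service whichever ones
// have data ready instead of reading them one after the other.
$targets = array('service-one.example.com:80', 'service-two.example.com:80');
$streams = array();

foreach ($targets as $target) {
    $errno = 0; $errstr = '';
    $s = stream_socket_client('tcp://' . $target, $errno, $errstr, 5,
            STREAM_CLIENT_CONNECT | STREAM_CLIENT_ASYNC_CONNECT);
    if ($s === false) {
        continue;
    }
    stream_set_blocking($s, false);
    // ... fwrite() the request for this service here ...
    $streams[] = $s;
}

while (count($streams)) {
    $read = $streams;
    $write = null;
    $except = null;
    if (stream_select($read, $write, $except, 5) === false) {
        break;
    }
    foreach ($read as $r) {
        $data = fread($r, 8192);
        if ($data === '' || $data === false) {
            // Remote end has finished; drop this stream from the set.
            fclose($r);
            unset($streams[array_search($r, $streams, true)]);
        } else {
            // ... buffer / process $data for this connection ...
        }
    }
}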

A short example would be to use the non-blocking mysql query mode against
multiple connections; that way mysql does the heavy lifting in parallel
while you process the results sequentially.
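
Something along these lines, assuming the mysqlnd-based async mysqli API is
what's meant (credentials and queries are placeholders):

<?php
// Start a query on each connection with MYSQLI_ASYNC, then poll and reap
// the result sets as they become ready.
$queries = array(
    'SELECT SLEEP(1), 1',
    'SELECT SLEEP(1), 2',
    'SELECT SLEEP(1), 3',
);

$links = array();
foreach ($queries as $sql) {
    $link = mysqli_connect('localhost', 'user', 'pass', 'test');
    mysqli_query($link, $sql, MYSQLI_ASYNC);   // returns immediately
    $links[] = $link;
}

while (count($links)) {
    $read = $links;
    $error = $links;
    $reject = $links;
    if (mysqli_poll($read, $error, $reject, 1) < 1) {
        continue;   // nothing ready yet, poll again
    }
    foreach ($read as $link) {
        if ($result = mysqli_reap_async_query($link)) {
            while ($row = mysqli_fetch_row($result)) {
                // ... process $row ...
            }
            mysqli_free_result($result);
        }
        // This connection has been reaped; stop polling it.
        unset($links[array_search($link, $links, true)]);
    }
}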

In all scenarios /all/ of the contributing aspects have to be considered
though: the number of open connections, how much extra weight that puts on
the server (with a knock-on effect on other processes), what happens when
one of the "threads" fails, and so forth.

Normally there are many different ways to handle the same problem though,
such as views at the rdbms level, publishing / caching output, or
considering whether you are still in the right language - sometimes
factoring the bits which require multi-threading out into different
languages and services lends itself to a nicer solution.

And finally, more often than not, the same problem can be addressed by
taking the final output and working out how to produce it in reverse: many
queries can be turned into one, data can be normalised higher up the chain,
sorting can occur in php rather than in the rdbms, and so on. There are
always many ways to skin the cat :)
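
As a trivial illustration of the last point, sorting in php rather than via
ORDER BY (the data here is just made up):

<?php
// Rows fetched without an ORDER BY, sorted in userland instead.
$rows = array(
    array('id' => 3, 'name' => 'Charlie'),
    array('id' => 1, 'name' => 'Alice'),
    array('id' => 2, 'name' => 'Bob'),
);

function by_name($a, $b)
{
    return strcmp($a['name'], $b['name']);
}

usort($rows, 'by_name');
print_r($rows);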

Regards!


