Stut wrote:
Adam Zey wrote:
Tunneling arbitrary TCP packets. Similar idea to SSH port forwarding,
except tunneling over HTTP instead of SSH. A good example might be
encapsulating an IRC (or telnet, or pop3, or ssh, etc) connection
inside of an HTTP connection, such that incoming IRC traffic goes
over a GET to the client, and outgoing IRC traffic goes over a POST
request.
So, the traffic is bounced:
[mIRC] ---> [client.php] -----internet-----> [apache ---> server.php] -----internet-----> [irc server]
And the same in reverse. The connection between client.php and
server.php is taking the IRC traffic and encapsulating it inside an
HTTP connection, where it is unpacked by server.php before being sent
on to the final destination. The idea is to get TCP tunneling
working; once you do that, you can rely on other programs to use that
TCP tunnel for more complex things, like SOCKS.
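To make that concrete, the POST half of server.php could be as simple as something like the sketch below. The host, port, and error handling are just placeholders, and this naive version reconnects on every request; keeping one socket open across requests is exactly the hard part being discussed.

<?php
// server.php, POST half: take the raw bytes client.php sent us and
// push them on to the real destination.  Host/port are placeholders,
// and this naive version reconnects on every request -- keeping one
// socket alive across requests is the hard part discussed below.
$data = file_get_contents('php://input');

$sock = fsockopen('irc.example.com', 6667, $errno, $errstr, 5);
if ($sock === false) {
    header('HTTP/1.0 502 Bad Gateway');
    exit;
}
fwrite($sock, $data);
fclose($sock);
?>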
You're trying to get a square peg through a round hole. The HTTP
protocol was not designed to do anything like this, so the standard
implementation by most web servers and PHP does not allow what you are
trying to do.
That's the fun of it, making things like PHP and HTTP do things they
weren't supposed to.
I'm curious about your 'lots of POSTs' solution. How are you keeping
the connection open on the server-side? It's certainly not possible to
maintain that connection between requests without using a process
outside the web server that maintains the connections. I've
implemented a system in the past to proxy IRC, MSN and AIM connections
in this way, but it only worked because the requests that came into
PHP got passed to this other process which held all the connections
and managed the traffic. And yes, it did generate a huge amount of
traffic even when it wasn't doing anything due to the need to poll the
server for new incoming messages.
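(Something shaped roughly like this, I'd guess -- PHP itself holds nothing, each request just hands off to a long-lived local daemon that owns the sockets. The port number and the newline framing below are invented purely for illustration:)

<?php
// Sketch of that hand-off: PHP keeps no state, it just relays each
// request to a long-running local daemon that owns the actual
// IRC/MSN/AIM sockets.  Port 9000 and the newline framing are made up.
$daemon = fsockopen('127.0.0.1', 9000, $errno, $errstr, 2);
if ($daemon === false) {
    header('HTTP/1.0 503 Service Unavailable');
    exit;
}

// Forward whatever the client sent with this request...
fwrite($daemon, file_get_contents('php://input') . "\n");

// ...and hand back whatever the daemon has queued for this session.
echo fgets($daemon);
fclose($daemon);
?>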
With the lots-of-POSTs approach, the connection is a regular keepalive, which any
webserver happily keeps open. When this keepalive connection closes, you
open a new one. At least this way, while I still need to send lots of
POSTs (say, one every 100ms or 250ms, something like that), I can limit
the new connections to once every minute or two. While 4 messages per
second may seem like a lot, I would imagine that an application such as
Google Maps generates a LOT more than that while a user is scrolling
around; it would have to load dozens of images per second as the user
scrolled.
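(For what it's worth, on the client.php side reusing a single curl handle is enough to get that keepalive behaviour. The URL, the 250ms interval, and read_pending_irc_bytes() below are all placeholders of mine, not anything final:)

<?php
// client.php side of the lots-of-POSTs approach: reuse one curl handle
// so consecutive POSTs ride the same keep-alive connection.  The URL,
// the 250ms interval, and read_pending_irc_bytes() are placeholders.
$ch = curl_init('http://example.com/server.php');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

while (true) {
    $chunk = read_pending_irc_bytes();   // hypothetical: bytes waiting from mIRC
    if ($chunk !== '') {
        curl_setopt($ch, CURLOPT_POSTFIELDS, $chunk);
        curl_exec($ch);                  // same handle => same TCP connection
    }
    usleep(250000);                      // at most ~4 POSTs a second
}
?>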
Polling for incoming messages isn't a problem, as there is no incoming
data on the POSTs. A separate GET request handles incoming data, and I
can simply do something like select, or even something as mundane as
polling the socket myself. But I don't need to poll the server. And the
4-per-second POST transactions don't need to be sent unless there is
actually data to send. As long as a keepalive request goes out often
enough that the remote server doesn't sever the connection (my tests show
Apache 2 with a 15 second timeout on a keepalive connection), there don't
need to be any POSTs unless there is data waiting to be sent.
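(The GET half is the easy part; server.php can just block on the remote socket with stream_select and stream bytes back as they arrive. Assuming $sock is the already-open connection to the IRC server -- which is the part everyone agrees is hard -- it's roughly:)

<?php
// GET half of the tunnel: wait on the remote socket with stream_select
// and stream anything that arrives straight back to client.php.
// Assumes $sock is the already-open connection to the IRC server;
// the 1-second select timeout and 8K read size are arbitrary.
set_time_limit(0);
while (!feof($sock)) {
    $read   = array($sock);
    $write  = null;
    $except = null;
    if (stream_select($read, $write, $except, 1) > 0) {
        echo fread($sock, 8192);
        flush();   // try to push bytes out immediately (output buffering permitting)
    }
}
?>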
Of course, this solution has high latency (up to 250ms delay), and
generates a fair number of POST requests, so it still isn't ideal. But
it should work, since it doesn't do anything out-of-spec as far as HTTP
is concerned.
This demonstrates a point at which you need to reconsider whether a
shared hosting environment (which I assume you're using given the
restrictions you've mentioned) is enough for your purposes. If you had
a dedicated server you could add another IP and run a custom server on
it that would be capable of doing exactly what you want. In fact there
are lots of nice free proxies that will happily sit on port 80.
However, it's worth noting that a lot of firewalls block traffic that
doesn't look like HTTP, in which case you'll need to use SSL on port
443 to get past those checks.
I wasn't targeting shared hosting environments. I imagine most of them
use safe mode anyhow. I was thinking more along the lines of somebody
with a dedicated server, or perhaps just a Linux box in their closet.
The thing is, I'm not writing a web proxy. I'm writing a tunneling
solution. And, the idea is that firewalls won't block the traffic,
because it doesn't just look like HTTP traffic, it really IS HTTP
traffic. Is a firewall really going to block a download because the data
being downloaded doesn't "look" legitimate? As far as the firewall is
concerned, it just sees regular HTTP traffic. And of course, a bit of
obfuscation of the data being sent wouldn't be too hard. The idea here is
that no matter what sort of proxy or firewall the user is behind, they
will be able to get a TCP/IP connection for any protocol out to the
outside world. Even if the user is sitting on a LAN with no gateway, no
connection to the internet except a single proxy server, they should
still be able to make a TCP/IP connection by tunneling it through that
proxy server.
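(By obfuscation I mean something as dumb as XOR-plus-base64 on each chunk before it goes into the request body -- obfuscation, not encryption; the key is obviously just an example:)

<?php
// Trivial obfuscation for a chunk of tunnel data: XOR against a shared
// key, then base64 so the result is clean ASCII for the POST body.
// This is obfuscation only, not encryption.
function obfuscate($data, $key) {
    $out = '';
    for ($i = 0, $len = strlen($data); $i < $len; $i++) {
        $out .= $data[$i] ^ $key[$i % strlen($key)];
    }
    return base64_encode($out);
}

function deobfuscate($data, $key) {
    $raw = base64_decode($data);
    $out = '';
    for ($i = 0, $len = strlen($raw); $i < $len; $i++) {
        $out .= $raw[$i] ^ $key[$i % strlen($key)];
    }
    return $out;
}

// Example: obfuscate('PRIVMSG #chan :hello', 'example-key')
?>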
Anyways, long story (sorry) short, your square peg won't go in the
round hole without serious modification. Hope that helps.
Sadly, you seem to be right, as far as my original goal of a
never-ending (or rather, extremely long) POST request for sending data
is concerned. Everybody seems to be saying the same thing on that topic.
Which means I'll probably have to resort to my less efficient backup
solution.
Regards, Adam Zey.