RVK wrote:
> I don't think buffer overflows have anything to do with transparent proxying. Transparent proxying is just doing some protocol filtering.
A transparent proxy is a protocol filter, which is why it is an ideal place for detecting protocol-dependent buffer overflow attacks. The detection code has to be built into the proxy, of course. Examples: a web proxy can check for anomalously long "GET" requests; there have been web servers with buffer overflows when the URL was too long. The proxy can terminate such connections, protecting the possibly vulnerable web server. An ftp proxy can check for (and remove) anomalously long filenames, as well as funnies like "ls */*/*/*/*/*". Similar for many other services.

The proxy approach is useful because knowledge of the protocol is necessary. After all, it is fine to upload or download a huge file via ftp, while a 2 MB filename is suspicious. Size alone is not enough.
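To make that concrete, here is a rough C sketch of the kind of length check such a proxy could apply to a request line before forwarding it. The 2048-byte limit and the list of accepted methods are my own assumptions, not taken from any real proxy.

/* Sketch of a proxy-side sanity check on an HTTP request line.
 * MAX_REQUEST_LINE is an assumed limit, not from any real proxy. */
#include <stdio.h>
#include <string.h>

#define MAX_REQUEST_LINE 2048

/* Return 1 if the request line looks acceptable, 0 if the connection
 * should be dropped instead of forwarded to the server. */
static int request_line_ok(const char *line)
{
    if (strlen(line) > MAX_REQUEST_LINE)
        return 0;                         /* suspiciously long URL */
    if (strncmp(line, "GET ", 4) != 0 && strncmp(line, "POST ", 5) != 0)
        return 0;                         /* only pass methods we understand */
    return 1;
}

int main(void)
{
    char huge[5000];

    memset(huge, 'A', sizeof(huge) - 1);
    huge[sizeof(huge) - 1] = '\0';

    printf("normal: %d\n", request_line_ok("GET /index.html HTTP/1.0"));
    printf("huge:   %d\n", request_line_ok(huge));  /* refused */
    return 0;
}

The same idea applies to the ftp case: bound the filename length and refuse anything that exceeds it.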
> Still, the proxy code may have some buffer overflows.
A proxy (or any other attempt at a firewall) may have holes of its own, of course, but it isn't that hard to avoid introducing them.
> The best way is first to try to avoid any buffer overflows by taking programming precautions.
Of course, that works if you have the source and that source isn't an unmaintainable mess. One or both of those conditions may fail, and then the IDS becomes useful.
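The "programming precautions" mostly come down to always passing the destination size, so oversized input gets truncated instead of overrunning the buffer. A tiny C example (buffer sizes are arbitrary):

/* Bounded calls like fgets() and snprintf() instead of gets() and sprintf(). */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char name[64];
    char greeting[128];

    /* fgets() stops after sizeof(name)-1 bytes; gets() would not. */
    if (fgets(name, sizeof(name), stdin) == NULL)
        return 1;
    name[strcspn(name, "\n")] = '\0';   /* strip trailing newline, if any */

    /* snprintf() never writes past sizeof(greeting); sprintf() could. */
    snprintf(greeting, sizeof(greeting), "Hello, %s", name);
    puts(greeting);
    return 0;
}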
> Another way is to chroot the services, if running them on a firewall.
Provided it is a unixish server...
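On such a system the pattern is roughly the following; /var/empty and uid/gid 65534 are only example values, and the program has to start as root for chroot() to succeed.

/* Sketch: confine a service to an empty directory before it touches
 * any network data, then drop root so the chroot cannot be escaped. */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    if (chroot("/var/empty") != 0 || chdir("/") != 0) {
        perror("chroot");
        return 1;
    }
    /* Drop privileges afterwards; a chrooted root process can break out. */
    if (setgid(65534) != 0 || setuid(65534) != 0) {
        perror("drop privileges");
        return 1;
    }
    /* ... service main loop would run here, confined to /var/empty ... */
    printf("running chrooted as uid %d\n", (int)getuid());
    return 0;
}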
> There are various mechanisms which can be used, like bounding the memory region itself. Stack randomisation and canary-based approaches can also stop many buffer overflow attacks.
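For what it's worth, here is a hand-rolled sketch of the canary idea; the magic value and struct layout are only for illustration, since in practice the compiler inserts and checks the canary for you (e.g. gcc's -fstack-protector).

/* A known value is placed after the buffer and checked before returning;
 * an overflow that clobbers it is detected and the program aborts. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define CANARY 0xDEADBEEFu

static void copy_checked(const char *input)
{
    struct {
        char buf[32];
        unsigned int canary;
        char spill[64];   /* room so the demo overflow stays inside this struct */
    } frame;

    frame.canary = CANARY;

    /* Deliberately unsafe copy, so an oversized input clobbers the canary. */
    strcpy(frame.buf, input);

    if (frame.canary != CANARY) {
        fprintf(stderr, "canary smashed, aborting\n");
        abort();
    }
    printf("ok: %s\n", frame.buf);
}

int main(void)
{
    copy_checked("short and harmless");
    copy_checked("this input is quite a bit longer than thirty-two bytes!");
    return 0;
}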
These may or may not be available. You can always stick a proxy firewall in front of the server, though, no matter what OS and server apps it runs.

Helge Hafting

Linux-crypto: cryptography in and on the Linux system
Archive: http://mail.nl.linux.org/linux-crypto/