No matter which solution you choose, the real problem is detecting that the server has failed.
If the server stops responding to requests entirely, that's easy enough. But if there is no clear-cut failure — say, one server gradually slows down, or still answers the load balancer's health polls but not certain requests from clients — the judgement call is much harder.
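One common way to catch the "slow but not dead" case is to check latency as well as reachability. Here is a minimal sketch in Python; the URL, timeout, and latency threshold are all assumptions you would tune for your setup:

```python
import time
import urllib.request

def is_healthy(url, timeout=2.0, max_latency=0.5):
    """Hypothetical health check: a server that merely responds is not
    treated as healthy; it must also respond fast enough."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            if resp.status != 200:
                return False
    except OSError:  # connection refused, timeout, DNS failure, ...
        return False
    # Treat a slow response the same as a failed one.
    return (time.monotonic() - start) <= max_latency
```

A real load balancer does essentially this on a schedule, marking a backend down after several consecutive failures rather than one.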
Apart from clustering solutions and hardware load balancers, you could also add Apache 2.2 with mod_proxy_balancer to the list.
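For reference, a mod_proxy_balancer setup is only a few lines of Apache configuration. This is a sketch with placeholder hostnames, not a hardened config:

```apache
# Define a balancer pool of two backends (hostnames are placeholders).
<Proxy balancer://mycluster>
    BalancerMember http://web1.example.com:8080
    BalancerMember http://web2.example.com:8080
</Proxy>

# Send all traffic to the pool, distributing by request count.
ProxyPass / balancer://mycluster/ lbmethod=byrequests
```

Apache will stop sending requests to a member it cannot reach and retry it later, which covers the hard-failure case (though not the "slowly degrading server" case discussed above).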
Lots of suggestions are possible, but they all inflict some pain.
A very clean and widespread solution is to run two identical webservers (use a deployment script to keep the servers absolutely in sync) and put a hardware load balancer in front of them. Here the pain is mostly financial.
An alternative is to set up a Linux HA cluster. There are lots of how-tos around, but the learning curve is steep enough to mean pain in practice, not just in theory. High availability is only the goal; at first, it just hurts.
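To give a feel for what a minimal Linux HA setup involves: one common building block (keepalived, using VRRP, rather than the full Heartbeat/Pacemaker stack) floats a virtual IP between two machines. This is a sketch with placeholder interface names and addresses:

```
# Hypothetical /etc/keepalived/keepalived.conf on the primary node.
# The standby node uses state BACKUP and a lower priority.
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        192.168.1.100    # clients connect here, not to either real host
    }
}
```

If the primary dies, the standby takes over the virtual IP within seconds. Even this simple setup raises the questions above: what counts as "dead", and how do you avoid both nodes claiming the IP at once.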
You could also try the simplest route and just make your server more stable. If you know you can cut your downtime by 50% by investing a week of work in the server, that is probably worth it. If you need to cut it by 95% or 99%, though, that would mean a lot of pain anyway.