Hi folks, I had a couple of observations that may not sit well with most of you...

Hardware speech synthesis is becoming obsolete. Why? More and more systems, especially laptops, are being built without RS232 serial ports, and when I buy my next laptop, I won't let the presence of an RS232 port be a deciding factor. As for USB synthesizers, their vendors won't release the product information needed to write drivers for them, so those devices remain unsupported. Thus, I'm not buying one; who wants to do business with people like that anyhow? So it looks like software speech is the way of the future, at least for me.

Next, software speech is more convenient, especially when using a laptop: it's one less peripheral to carry around.

The question to ask is this: given the decline of hardware synthesis, is it really necessary to have speech support within the kernel itself? Software synthesizers run in user mode, so once you switch to software speech, the benefits of a speech-enabled kernel, notably a talking boot process, are lost anyway (see the sketch in the PPS below).

Comments are welcome.

PS. I'm not a GUI user, so I'm arguing from a console / command-line perspective.

-- Chris
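
PPS. To make the user-mode point concrete, here's a minimal sketch of software speech, assuming the "flite" command-line synthesizer is installed (any user-space synth would do, and the script is just an illustration, not any particular screen reader). It reads text on stdin and speaks it line by line, entirely in user space:

    import subprocess
    import sys

    # Speak each line of stdin through flite, a user-space software
    # synthesizer. "flite -t TEXT" synthesizes and plays TEXT.
    for line in sys.stdin:
        text = line.strip()
        if text:
            subprocess.run(["flite", "-t", text])

Something like "dmesg | python3 speak.py" works fine once you're logged in, but the kernel's own boot messages scroll by silently long before any user-space program like this can run. That's the trade-off I'm pointing at.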