I'm not qualified to comment on the technical merits of a kernel vs. a user-space solution, but I know that from a maintenance point of view we would prefer the user-space approach. More importantly, this is the kind of forward thinking I would like to see more of in the access community. Over the next couple of years we will increasingly move over to ultra-mobile technologies. These will require lean kernels, but there is scope for many options in user-space. Ubuntu is working actively with Intel on these new platforms to make sure accessibility is a consideration from the very start. Hopefully we can avoid the long accessibility gap we had with mobile phones.

Henrik C.M. Brannon wrote:
> Hi folks,
> I had a couple of observations that may not sit well with most of you ...
>
> Hardware synthesis is becoming obsolete. Why? More and more systems,
> especially laptops, are being manufactured without RS232 ports. When I
> buy my next laptop, I won't let the presence of RS232 be a determining
> factor. The vendors of USB synths won't release their product
> information, so these are unsupported. Thus, I'm not buying one. Who
> wants to do business with people like that anyhow? So it looks like
> software speech is the way of the future, at least for me. Next,
> software speech is more convenient, especially when using a laptop. You
> have to carry one less peripheral with you.
>
> The question to ask is this: given the decline of hardware synthesis,
> is it really necessary to have speech support within the kernel itself?
> Software synthesizers run in user mode, so the benefits of a
> speech-enabled kernel -- notably a talking boot process -- are lost.
>
> Comments are welcome.
>
> PS. I'm not a GUI user, so I'm arguing from a console / command-line
> perspective.
>
> -- Chris
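
PS. To make the user-space point concrete, here is a rough sketch of what software speech in user mode can look like. It assumes the espeak synthesizer happens to be installed; the engine and the exact call are only illustrative, and any user-mode engine would do.

    #!/usr/bin/env python3
    # Rough sketch: speak text from an ordinary user-space process by
    # handing it to a software synthesizer. espeak is assumed to be
    # installed; no kernel driver or serial hardware is involved.
    import subprocess

    def say(text):
        # espeak accepts the text to speak as a command-line argument.
        subprocess.run(["espeak", text], check=True)

    if __name__ == "__main__":
        say("Software speech, running entirely in user space.")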