Folks, I am working towards producing a small, pocket-sized Linux system based on an ARM920T processor with an onboard hardware speech synth. It will have both USB host and device ports. When it is not connected to a PC it will be able to operate like a PDA, supporting a normal keyboard as well as simpler button/switch type interfaces, and providing audio PDA functions: reading your notes, contacts, schedule, directions, etc.

It will also operate as a speech synth when plugged into a PC, and I have the ambitious goal of making it driverless. By correctly implementing the USB CDC (Communications Device Class) serial interface, no drivers will be needed to access what will appear as a virtual serial port on both Windows and Linux (a rough sketch of the PC side of that is pasted below). Further, I believe there is a way, software-wise, to make a USB storage class entity appear to the system as a CD-ROM and automatically trigger the AUTORUN.INF process, which could then launch a screen reader and other specialized audio-based apps (again, a rough sketch below). This would allow plugging into any PC, Windows or Linux, and having a working speech synth without installing anything.

It will be some time before some of the software is worked out, but I am deep into the hardware now and should have prototypes soon. I am very interested in feedback, especially on what kind of features you would like to have in both a speech synth and a personal, audio-based PDA-type organizer. How can the human interface be made more ergonomic if we ditch the keyboard paradigm for some controls? Perhaps something like a small remote control for controlling speech playback and volume while roaming, or even in some cases while connected to the PC, with a full keyboard still supported when necessary.

My strategic goal is to produce these in low volume and sell them at a very low price, much lower than anything similar. The goal is not to make money but to slowly build the infrastructure and skills to continue working on other similar products, mostly geared towards portable, mobile systems, with special emphasis on assistive interfaces of many kinds, not just speech. I already have facilities for CAD design of the hardware, production soldering, assembly and testing, and I'm starting on the software now. When I get further along I may ask whether anyone wants to participate in some beta testing by trying out the device and giving feedback.

As I said, I am very interested in feedback, especially out-of-the-box thinking: you do not need a PC to do these functions, the old rules go away, and new possibilities emerge, like Bluetooth, WiFi, GSM/GPRS, or a specialized small keypad/remote to control the speech. There could be a mode where you plug into a PC's USB port and walk away with only a Bluetooth headset and a small keyboard, still able to read email and the web within certain fixed restrictions, or move to a full keyboard for serious work. If this all sounds interesting to you and you have some ideas, or think you'd like to try out early prototypes, please contact me offline.

I will also surely be trying out speakup, compiled into the ARM kernel, with spoken playback of that kernel's console plus the ability to act as a speech synth for a PC running speakup (one more sketch of that below). That work should be in process very soon. I would love to hear any suggestions or ideas, especially in the area of usability, simplicity and ergonomics: how can a speech synth be more usable and more portable, and how can it become much more assistive, if you have a processor running Linux in your pocket? Please contact me off list, doug at proficio.ca, if you'd like to discuss this with me.
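For the technically inclined, here is roughly what "driverless" would look like from the PC side once the device enumerates as a CDC ACM virtual serial port: a host program just opens the port like any other serial device. This is only a sketch under my own assumptions; the device node (/dev/ttyACM0 on Linux, a COM port on Windows), the 9600 baud rate, and the idea of sending plain text to be spoken are all placeholders, since the actual protocol isn't settled yet.

    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <termios.h>

    int main(void)
    {
        /* Assumed device node; a CDC ACM gadget typically shows up
           as /dev/ttyACM0 on Linux (and a COMx port on Windows). */
        int fd = open("/dev/ttyACM0", O_RDWR | O_NOCTTY);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        /* Basic raw 9600-8-N-1 setup; the real speed is not decided yet. */
        struct termios tio;
        tcgetattr(fd, &tio);
        cfmakeraw(&tio);
        cfsetispeed(&tio, B9600);
        cfsetospeed(&tio, B9600);
        tcsetattr(fd, TCSANOW, &tio);

        /* Assumed protocol: plain text ending in newline gets spoken. */
        const char *msg = "hello from the host PC\n";
        write(fd, msg, strlen(msg));

        close(fd);
        return 0;
    }

The whole point is that nothing device-specific has to be installed on the host for this to work.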
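On the emulated CD-ROM idea, the Windows side comes down to Windows reading an AUTORUN.INF out of the root of what it believes is a CD. Something like the tiny helper below could be run when mastering the image the device presents; SCREADER.EXE is purely a placeholder name for whatever screen reader launcher ends up on that image, and whether autorun actually fires still depends on the host's own settings.

    #include <stdio.h>

    int main(void)
    {
        /* Write the AUTORUN.INF that goes in the root of the emulated CD.
           SCREADER.EXE is a placeholder for the real launcher. */
        FILE *f = fopen("AUTORUN.INF", "w");
        if (!f) {
            perror("fopen");
            return 1;
        }
        fprintf(f, "[autorun]\r\n");
        fprintf(f, "open=SCREADER.EXE\r\n");
        fprintf(f, "icon=SCREADER.EXE,0\r\n");
        fclose(f);
        return 0;
    }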
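And for the speakup idea, the device-side job is essentially a bridge: read whatever the PC's speakup sends down the USB serial link and hand it to the onboard synth. The loop below is only a skeleton; the gadget serial node (/dev/ttyGS0) and the synth node (/dev/synth) are assumptions on my part, and a real version would have to interpret whichever serial-synth protocol speakup is configured for rather than passing raw bytes straight through.

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        /* Assumed device nodes: the USB gadget serial port the PC's
           speakup talks to, and the onboard hardware synth. */
        int usb = open("/dev/ttyGS0", O_RDONLY | O_NOCTTY);
        int synth = open("/dev/synth", O_WRONLY);
        if (usb < 0 || synth < 0) {
            perror("open");
            return 1;
        }

        /* Skeleton bridge loop: a real one would interpret the synth
           protocol (rate, pitch, index markers) instead of copying bytes. */
        char buf[256];
        ssize_t n;
        while ((n = read(usb, buf, sizeof buf)) > 0)
            write(synth, buf, n);

        return 0;
    }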
I will likely be giving away a certain number of free prototypes eventually, but it may be some time before that happens. -- Doug