Re: speech standard

On 15 Mar, Nicolas Pitre wrote:
> On Sun, 14 Mar 1999, Jim Rebman wrote:
> 
>> >>    Would it make any sense to have a unified braille/sound speech
>> >>    interface (maybe at a higher level)?
>> >
>> >Not really.  Speech and Braille are completely different concepts, even if
>> >their goal is to provide information from the running application.
>> >Braille is static and dimensional whereas speech is volatile.  There is no
>> >way to merge them without losing on one side.
>> 
>> Ok, but don't tell this to the people at Henter-Joyce -- they seem to have
>> done a reasonable job of it.  Nobody said that the representations of each
>> modality have to be identical in order for each to be effective.
> 
> I actually wrote drivers for Henter-Joyce's JFW, and the driver APIs for
> braille and speech are completely separate.  The point here is not to try
> to merge the braille and speech APIs in order to obtain uniform device
> access.  A screen reader may well make use of both braille and speech and
> do a good job with both of them, but driver definitions for braille and
> speech have to remain separate.

What I was trying to say, and am arguing for, is this: wouldn't it be
possible to have multiple layers of APIs that get progressively more
specific?

Something similar to the GGI approach for graphics in the kernel.

Highest abstraction level: a text device, library or daemon. You feed it
	some text and it is spoken, written on the screen, or output on
	the braille display. You have a few minimal options to manipulate
	the text, for example scrolling, clearing the screen and the
	like.

Mid abstraction level: you can access text --or-- graphics --or--
	braille --or-- speech output, but you cannot set hardware-specific
	attributes such as, say, the pitch of the voice.

Low level: you can do whatever you like, that is, access all the
	features of your particular braille display or speech synthesizer.
	(A rough sketch of what these layers could look like follows
	below.)
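To make this a bit more concrete, here is a rough sketch in C of what
such layered interfaces might look like. All the names (txt_out,
speech_out, braille_out, set_pitch and so on) are just made up for
illustration; nothing like this exists anywhere yet:

/* Hypothetical sketch of a layered output API -- all names invented. */

/* Highest level: plain text in, rendering is the driver's problem. */
struct txt_out {
        int (*write_text)(struct txt_out *dev, const char *text);
        int (*clear)(struct txt_out *dev);
        int (*scroll)(struct txt_out *dev, int lines);
};

/* Mid level: you pick the modality, but no hardware-specific knobs. */
struct speech_out {
        int (*say)(struct speech_out *dev, const char *text);
        int (*stop)(struct speech_out *dev);
};

struct braille_out {
        int (*show)(struct braille_out *dev, const unsigned char *cells, int n);
        int (*cursor)(struct braille_out *dev, int cell);
};

/* Low level: device-specific escape hatch, e.g. voice pitch on one
 * particular synthesizer.  Anything goes here. */
struct speech_lowlevel {
        struct speech_out base;   /* still usable through the mid level */
        int (*set_pitch)(struct speech_lowlevel *dev, int pitch);
        int (*set_voice)(struct speech_lowlevel *dev, int voice);
};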

Now, if you are writing an application, you can decide for yourself how
much comfort you want, that is, at which level you interface with the
output system and which features you need or want to use.

Examples: all the traditional Unix tools can easily be piped into a
device at the highest abstraction level, and you can be fairly sure of
getting sensible output on whatever device you are actually using.
Emacspeak, as a counterexample, would probably access the lowest level,
because there you want to be able to use different voice qualities for
different types of information.
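To illustrate the first example: assuming the highest level were
exposed as a character device -- I'll call it /dev/textout, a name I am
inventing here -- any traditional filter could feed it along these
lines:

/* Hypothetical: pipe stdin to a top-level text output device.
 * /dev/textout does not exist; it stands for whatever the highest
 * abstraction level would end up being called. */
#include <stdio.h>

int main(void)
{
        FILE *out = fopen("/dev/textout", "w");
        char line[1024];

        if (!out) {
                perror("/dev/textout");
                return 1;
        }
        while (fgets(line, sizeof(line), stdin))
                fputs(line, out);  /* spoken, printed or brailled --
                                      the application does not care */
        fclose(out);
        return 0;
}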

Am I thinking wrong?
*
t

-- 
------------------------------------------------------------------------------
             Tomas Pospisek - Freelance: Linuxing, Networking
                      http://spin.ch/~tpo/freelance
         www.SPIN.ch - Internet Services in Graubuenden/Switzerland
------------------------------------------------------------------------------


