Re: Combined or separate speech/braille solution?

> The interaction would best take place at the level of the access software
> so that the braille and speech devices can be configured to perform
> complementary functions, in ways that would require exploration and
> experimentation.

I encapsulated the concept of multiple rendering paths of the same data
in the term "polymedia" some years ago.  The notion is that content
(defined for presentation in structural tags) is parsed by presentation
software and rendered in as many forms as the available output devices
allow, or in whatever subset of those forms the user has chosen.  (Artistic
control can be accommodated, but the topic is beyond what we need to
consider here.)  Polymedia differs from multimedia in that the user can
reasonably expect to have access to data in all the forms that are most
useful, rather than in a single controlled access path determined by the
information publisher.  And a user might use different forms at different
times depending on the output devices available or the tasks that the
user is performing.  (Access while driving a car might very well require
a different output medium from access while sitting at a desk, for example.)
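The dispatch pattern this implies can be sketched in a few lines. This is a purely illustrative example, not an existing API: the renderer classes, the tag/text content representation, and the `render_all` function are all hypothetical, chosen only to show structurally tagged content flowing through every output path the user has enabled.

```python
# Illustrative sketch of the "polymedia" idea: one piece of structurally
# tagged content, rendered through each output path the user has enabled.
# All names here are hypothetical, not part of any real access-software API.

class Renderer:
    def render(self, element):
        raise NotImplementedError

class SpeechRenderer(Renderer):
    def render(self, element):
        tag, text = element
        # A speech path might announce the structural role before the text.
        return f"[speak] {tag}: {text}"

class BrailleRenderer(Renderer):
    def render(self, element):
        tag, text = element
        # A braille path might use a terse structural marker instead.
        return f"[braille] ({tag[0]}) {text}"

def render_all(content, renderers):
    """Render the same tagged content through every enabled output path."""
    return {name: [r.render(el) for el in content]
            for name, r in renderers.items()}

# The user (not the publisher) decides which paths participate:
enabled = {"speech": SpeechRenderer(), "braille": BrailleRenderer()}
doc = [("heading", "Polymedia"), ("para", "One source, many renderings.")]
out = render_all(doc, enabled)
```

The point of the sketch is that `doc` carries only structure and content; each renderer decides independently how its medium presents that structure, and dropping or adding an entry in `enabled` changes the set of renderings without touching the content.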

