Well, I am not sure how it worked, but I once tried an X server for
Windows which was able to figure out the controls under Linux, and my
Windows screen reader was able to read them after a fashion, so I wonder
if there is some window information passed to the X server after all.

On Wednesday 07/27/2005, Kelly Prescott (prescott at deltav.org) wrote:
> Hmm, an interesting concept...
> The problem is that by the time the X server sees most of the stuff, it
> is just screen position renderings. The server does not have a concept
> of letters, characters, etc.
> The server knows where you click on a screen, for example, but it just
> sends the information to the underlying application, which is
> responsible for deciding if you have clicked on a button, etc.
> This is an oversimplified explanation, but for our purposes it will do.
> Bottom line is that whatever toolbox, library, widget set, or rendering
> app is involved, it must feed the textual information to some interface
> the screen reader can get at, so it can be read.
> Hope this helps.
> kp
>
> On Tue, 26 Jul 2005, Lorenzo Taylor wrote:
>
> > Here's another idea; maybe no one has thought of it yet, or maybe it
> > is impossible to implement, but here goes.
> >
> > It seems that the existing approaches for X screen readers should be
> > taking a look at Speakup as a model. Gnopernicus, for example, uses
> > libraries that rely on certain information sent by the underlying
> > application libraries. Unfortunately, this implementation causes only
> > some apps to speak, while others which use the same widgets, but
> > whose libraries don't send messages to the accessibility system, will
> > not speak. But it occurs to me that X is simply a protocol by which
> > client applications send messages to a server, which renders the
> > proper text, windows, buttons and other widgets on the screen. I
> > believe that a screen reader that is an extension to the X server
> > itself (like Speakup is a set of patches to the kernel) would be a
> > far better solution, as it could capture everything sent to the
> > server and correctly translate it into humanly understandable speech
> > output without relying on "accessibility messages" being sent from
> > the client apps.
> >
> > Any thoughts on this would be welcome.
> >
> > Lorenzo
>
> _______________________________________________
> Speakup mailing list
> Speakup at braille.uwo.ca
> http://speech.braille.uwo.ca/mailman/listinfo/speakup

--
Your life is like a penny.  You're going to lose it.  The question is:
How do you spend it?

John Covici
covici at ccs.covici.com
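
P.S. A minimal Xlib sketch (my own illustration, not from the thread,
assuming plain core-protocol Xlib) of what Kelly describes: the "button"
below is nothing but a rectangle and a label held in the client; the
server reports only click coordinates, and the client does the hit test
and decides what the click means. Note, though, that XDrawString() does
send the label text across the wire as a core-protocol text request,
which may be how an X server with its own accessibility hooks could
recover some text, as in my Windows experience above. Toolkits that
render text client-side would leave the server with only pixels.

/* Build with: cc demo.c -lX11 -o demo */
#include <X11/Xlib.h>
#include <stdio.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy)
        return 1;

    int scr = DefaultScreen(dpy);
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr),
                                     0, 0, 200, 100, 1,
                                     BlackPixel(dpy, scr),
                                     WhitePixel(dpy, scr));
    XSelectInput(dpy, win, ExposureMask | ButtonPressMask);
    XMapWindow(dpy, win);

    GC gc = DefaultGC(dpy, scr);

    /* The "button" exists only as client-side state: a rectangle
     * and a label.  The server never knows it is a button. */
    int bx = 50, by = 40, bw = 100, bh = 24;

    for (;;) {
        XEvent ev;
        XNextEvent(dpy, &ev);

        if (ev.type == Expose) {
            XDrawRectangle(dpy, win, gc, bx, by, bw, bh);
            /* The label string crosses the wire here, so the
             * server does briefly see real characters. */
            XDrawString(dpy, win, gc, bx + 10, by + 16, "Press me", 8);
        } else if (ev.type == ButtonPress) {
            /* The server delivers only coordinates; the client
             * performs the hit test and assigns meaning. */
            int x = ev.xbutton.x, y = ev.xbutton.y;
            if (x >= bx && x <= bx + bw && y >= by && y <= by + bh)
                printf("button pressed\n");
        }
    }
}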