Hi, I changed the subject of the last thread because we have now strayed from the topic of web browsing.

OK, it is true that making things accessible is a fine line and should mostly be done by experts. I would not want a programmer who has never worked with or met a blind person writing my screen reading software. On the other hand, a programmer who does try to ensure a program is accessible from the get-go is going to be a friend of mine.

In some cases, such as web browsing, having built-in accessibility features is a good thing. Take Internet Explorer on Windows. The screen reader provides access to this browser by hooking into the page object model, reading the source code of the page, and then taking control of the browser. This means you are no longer using the browser to browse the web but the screen reader. The problem is that when the page source causes the object model to behave in a manner the screen reader does not expect, the whole system crashes, starting with the screen reader. And because the object model is a core part of Windows, when it crashes the whole system reboots, and you get to wait for your computer to restart and look at all the data you have lost.

If, on the other hand, the browser itself provides the access, you can avoid such problems. For example, if you want to move to a heading on the page, pressing a single key will bring focus to it. This helps everyone, not just blind users; remember that most sighted power users do not use a mouse to access information. They use the keyboard just as we do. (There is a rough sketch of what such heading navigation could look like at the end of this message.) Another advantage is that if this access were built into the browser, whichever screen reader you use would no longer matter. The only remaining question would be how well the screen reader reads the information on the screen, which is always a screen reader issue. Also, programs that ensure the features are there to allow screen review programs to access them are for the most part beneficial, as I explained in my last message on this subject.

Now, the last point, about people who cannot hear: no extra access is needed there, because the main means of reading a screen is with the eyes, not the ears. A person with one hand, or no hands, needs an alternative input device, and the programs that can interface with such a device are the ones with plenty of keyboard commands that can be mapped to it. So making a program accessible to blind and deaf-blind users (and by the way, I am close to the second) would benefit all users.

Last but not least, the spoken word is a natural means of interfacing with people. It is also going to become the natural means of interfacing with the PC, cell phone, washer, dryer, and so on. This means that programs that already have speech in mind will be ahead when this inevitable transition occurs. (There is a second sketch below showing basic speech output in a browser.)

I think I am beating a dead horse now; we all have our own ways of seeing things, and that is why I am working hard at becoming a Linux user and getting away from being a Windows user, so that I can make my own way of seeing things stick, at least for my own clients.

Hth
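P.S. For anyone curious, here is a minimal sketch of the kind of in-browser heading navigation I mean above. The hotkey ("h") and the wrap-around behavior are my own assumptions for illustration, not how any particular browser or screen reader actually does it:

```typescript
// Sketch: jump focus to the next heading each time "h" is pressed.
// The key choice and wrap-around are assumptions, not a real browser feature.
const headings = (): HTMLElement[] =>
  Array.from(document.querySelectorAll<HTMLElement>("h1, h2, h3, h4, h5, h6"));

let current = -1;

document.addEventListener("keydown", (e: KeyboardEvent) => {
  if (e.key !== "h") return;            // hypothetical hotkey
  const hs = headings();
  if (hs.length === 0) return;
  current = (current + 1) % hs.length;  // wrap back to the first heading
  const h = hs[current];
  h.tabIndex = -1;                      // make the heading focusable
  h.focus();                            // move keyboard focus onto it
});
```

Because this works on keyboard focus rather than on a copy of the page, it benefits sighted keyboard users exactly as much as screen reader users.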
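P.P.S. On the speech point, here is a second sketch showing how little code basic speech output takes in a browser that exposes the Web Speech API (window.speechSynthesis). The phrase and speaking rate are just example values:

```typescript
// Sketch: speak a piece of text aloud via the browser's speech synthesis,
// assuming a browser that supports the Web Speech API.
function speak(text: string): void {
  const utterance = new SpeechSynthesisUtterance(text);
  utterance.rate = 1.0;                    // normal speaking rate
  window.speechSynthesis.speak(utterance); // queue the utterance for speech
}

speak("Heading level two: Accessibility on the web");
```

A program designed with hooks like this from the start is exactly what I mean by having speech in mind before the transition happens.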