API stands for Application Programming Interface. Microsoft has one of these too, but ask yourself: how many programs talk right out of the box that weren't specifically designed for it? Zero, that's how many. Apple has gone one step further than Microsoft did and actually integrated their speech API into the OS directly, but still, if programs don't call the API, they won't talk.

That's why I call it an API, and not a screen reader as such. If it were a real screen reader, it would read things regardless of how they were coded. Of course, it wouldn't do a very good job with some things, and a fairly decent one with others, but it would still work. With Apple, if you don't code for the API at all, then it won't talk at all, as evidenced by the Terminal window, which doesn't talk at all because Apple hasn't done anything with the non-graphical components of their OS. Apple is calling it a screen reader, and to some degree it is, but it's a screen reader that won't make a single peep if the application doesn't do things properly. That's why I call it an API instead.

Apple's marketing department doesn't agree with me, but I think it's a very important distinction to make: "speech API" doesn't claim the OS talks, whereas "built-in screen reader" sets a level of expectation that Apple and their vendors aren't ready to accommodate.
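To put that in concrete terms, here's a minimal sketch of what "calling the API" actually means. This uses Cocoa's NSSpeechSynthesizer class with modern Swift names; the exact class and the spoken string are just for illustration, and the principle is what matters:

    import AppKit

    // A minimal sketch of "calling the speech API" on the Mac.
    // NSSpeechSynthesizer is Cocoa's built-in speech class (it has
    // since been superseded by AVSpeechSynthesizer, but the point
    // is the same either way).
    let synthesizer = NSSpeechSynthesizer()

    // The OS supplies the voice and the engine, but this line is the
    // part each application has to write itself. No call, no speech.
    _ = synthesizer.startSpeaking("Hello from the speech API.")

    // startSpeaking(_:) returns immediately, so a command-line
    // program has to stay alive long enough for the audio to play.
    RunLoop.current.run(until: Date(timeIntervalSinceNow: 3))

A program that never makes a call like that, the way Terminal's text output never does, sits there in complete silence no matter how much speech machinery the OS has built in.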