In answer to Brian's question, the issue arose in the context of a discussion concerning the general utility of a conventional "screen reader" approach, in which the braille/auditory interface is derived from monitoring the visual UI, by contrast with the benefits to be gained from enhancing the underlying software environment so that non-visual interfaces can take advantage of the semantic and structural distinctions reflected in the internal data representations maintained by applications. Emacspeak exemplifies, very successfully, the latter strategy.

I argued that priority should be given to influencing the development of open-source software environments to facilitate a separation of application functionality from visual presentation, in ways that permit braille and auditory interfaces to gain access, via generic means wherever possible, to the semantic content required to ensure a high quality of interaction. In reply, it was urged that good results could be achieved by traditional "screen reading" methods and by writing scripts that would monitor the visual interface and react in specified ways to predefined patterns in the visual presentation. It became clear in the course of this discussion that some of the participants attached importance to the development of a text-based screen reader for the Linux console which would employ scripts and macros in the manner suggested.

By way of rejoinder, I reiterated my reservations concerning the limits of the "screen reader" concept and argued that, in any case, there was no need for a separate screen reader to be developed: the X Window System supports both text-based and graphical applications, and UltraSonix already employs essentially the design described above, consisting, as it does, of core screen reader functions with the details of the interface controlled by scripts. Rather, given the limited development resources available in this field, efforts along traditional screen reader lines should be concentrated on UltraSonix, as it would provide a solution for both textual and graphical legacy applications and could also be integrated into some of the newer approaches such as Gnome and Java accessibility.

The current state of the discussion appears to be a mutual recognition of the limits of the "screen reading" paradigm, together with some continuing disagreement as to exactly where priorities should be allocated. I hope this is a reasonably fair summary.