I'm simply asking: which is the better approach, a combined speech/braille server, or speech and braille as separate systems? Or perhaps a third option: a speech server and a braille server that each know when a "sibling" server is running (and could therefore interact with it). --Hans