United States telecommunications giant AT&T will sign an agreement today to license a speech synthesis technology developed by a laboratory in Japan. The agreement, for an undisclosed sum, will allow AT&T to include the speech technology in its Watson ASAP (Advanced Speech Applications Platform) system, a software platform unveiled as a commercial product last year. Watson ASAP, which allows people to control computers with voice commands, was named after Thomas Watson, Alexander Graham Bell's assistant.

The technology was developed by the Advanced Telecommunications Research Institute (ATR) in Kyoto, which has been working on it for several years. ATR supervisor Nick Campbell headed the research. Mr Campbell has been working on synthesising Japanese, English, Korean and German, and has also worked with British and US dialects as well as the Japanese dialects of Tokyo, Osaka and Kumamoto. He is about to start work on Chinese with the help of Professor Chorkin Chan of the University of Hong Kong's computer science department.

Mr Campbell's technique involves creating a digitised database of sounds and indexing them. For example, the word 'China' can be extracted from a recording of the sentence 'nine children in a car': the 'ch' comes from 'children', the 'in' comes from 'nine', and the final syllable comes from 'a'. Concatenated, the three sounds make up the word 'China'.

Unlike previous attempts at speech synthesis, the results are realistic. Most of the work Mr Campbell has done so far has been with Japanese television personality and writer Kuroyanagi Tetsuko. She recorded a cassette tape of one of her books, which the ATR laboratory transferred to computer. The sounds were indexed, and her voice was used to create not only sentences she had never spoken, but sentences in English, Korean and German. The results of this and other experiments can be heard on the ATR Web page at http://www.itl.atr.co.jp/chatr.

Mr Campbell is certain he can manipulate almost any sound, given enough data.
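The indexing-and-concatenation idea behind the 'China' example can be sketched in a few lines of code. This is only an illustration of the general technique, not ATR's actual system: the unit labels and the database layout are invented for the example, and real systems splice indexed audio samples rather than text fragments.

```python
# Illustrative sketch of concatenative synthesis: index sub-word units
# in a recorded sentence, then reassemble them into a new word.
# (Hypothetical data structures; not the ATR/CHATR implementation.)

# Each "unit" points into a word of the recording 'nine children in a car':
# (source word, start index, end index of the fragment).
units = {
    "ch": ("children", 0, 2),   # 'ch' taken from 'children'
    "in": ("nine", 1, 3),       # 'in' taken from 'nine'
    "a":  ("a", 0, 1),          # final syllable taken from 'a'
}

def synthesise(unit_sequence):
    """Concatenate indexed fragments to form a word never recorded."""
    out = []
    for label in unit_sequence:
        word, start, end = units[label]
        fragment = word[start:end]
        out.append(fragment)
    return "".join(out)

print(synthesise(["ch", "in", "a"]))  # -> china
```

In a real unit-selection system the database holds many candidate recordings of each sound, and the synthesiser chooses the candidates whose context best matches the target sentence, which is why more recorded data yields more natural results.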
'If you give me an hour's worth of speech recording, I believe it will contain enough variation for me to create almost any sentence.'

Until now, most attempts at making computers sound like human beings have relied on signal processing, generating and shaping signals so that they resemble human speech. Apple Computer, for example, developed a system that could read English texts aloud; it was understandable but still sounded like a machine.

The licensing agreement with AT&T was just the kind of recognition the team was looking for, Mr Campbell said.

Announcement of the agreement follows a meeting at the University of Hong Kong last week of the speech synthesis standards body Cocosda (International Committee for the Co-ordination and Standardisation of Speech Databases and Assessment Techniques for Speech Input/Output). Cocosda, formed in 1990, split into two branches last year because of the differences between European and Asian languages. The Asian branch deals mainly with Chinese, Japanese and Korean.