AT&T officially launched its Watson Speech API today, featuring seven different speech recognition and transcription capabilities.
AT&T Watson is programmed to "learn" different accents, speaker variations, background environments, platform variations, dialects and speech patterns, and thus continually improve accuracy over time. It's a technology that's been a long time in development and more than 600 patents in the making, and we're excited to open it up to developers and see what they make of it.
Developers can build apps that use the Speech API in any of seven contexts (a sketch of what a request might look like follows the list):
Web Search - Search from within an app with the power of your voice. This context is trained to recognize several million mobile queries.
Business Search - Trained on tens of millions of local business entries, this context transcribes your search query so you can find what's in the area, from donuts to doctors.
Voicemail to Text - No need to scribble down a message: this context, trained on a massive set of data acquired from call centers, mobile applications and the web, turns your voicemail into shareable text.
SMS - Tuned for text messaging, this context transcribes your spoken message and can deliver it as text through a messaging service of your choice.
Question and Answer - Trained on over 10 million questions, this context accurately transcribes your question and returns the correct answer.
TV - Searching for show titles, movies and actors? This context transcribes your spoken search queries so you can search the AT&T U-verse program guide.
Generic - Automatically recognizes and transcribes English and Spanish. This context can also be used for general dictation, communication and search.
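To give a concrete sense of how a context comes into play, here is a minimal sketch of a transcription request. The endpoint URL, header names and context identifier below are assumptions made for illustration only, not the official interface; the real API reference and SDKs live at developer.att.com.

```python
# Minimal sketch of a Speech API call in a chosen context.
# NOTE: the endpoint URL, the "X-SpeechContext" header and the
# "BusinessSearch" identifier are illustrative assumptions, not the
# documented API; see developer.att.com for the actual reference.
import requests

API_ENDPOINT = "https://api.att.com/speech/v3/speechToText"  # assumed endpoint
ACCESS_TOKEN = "YOUR_OAUTH_ACCESS_TOKEN"  # obtained via the developer portal


def transcribe(audio_path: str, context: str = "BusinessSearch") -> dict:
    """Send a recorded audio clip for transcription in the given context."""
    with open(audio_path, "rb") as audio_file:
        response = requests.post(
            API_ENDPOINT,
            headers={
                "Authorization": f"Bearer {ACCESS_TOKEN}",
                "Content-Type": "audio/wav",    # raw audio in the request body
                "X-SpeechContext": context,     # which of the seven contexts to use
                "Accept": "application/json",
            },
            data=audio_file,
        )
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    # Ask the business-search context to transcribe a local-search query.
    result = transcribe("find_donuts.wav", context="BusinessSearch")
    print(result)  # JSON payload containing the recognized text
```

Swapping the context value is how an app would tell the service which of the seven trained contexts to apply to the same audio upload.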
Visit developer.att.com to access the API Platform and learn more about the SDKs.