Verbit, one of the notable small enterprises in closed captioning, is based in NYC and Tel Aviv. Its customized approach to handling closed-captioning metadata delivers high accuracy for clients by incorporating human teaching into machine learning. The startup, backed by $1.5 million in funding, was originally aimed at legal settings, but its focus quickly shifted to improving accessibility in academia. Its customers, including Stanford University, give resounding praise, and that confidence is not lost on the company: in a recent blog post, Verbit stated that its closed captioning system could eventually outperform Google's. Its "three loop" approach first runs audio through its patent-pending speech recognition technology, then has humans review and edit the output, then uses A.I. to train on that corrected data and perform a final review. This is an intriguing snapshot of how machine learning and manual training meld into a bulletproof caption stream, and it raises the question of how the process could be streamlined further for live broadcasting.
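The "three loop" flow described above can be sketched in a few lines of Python. This is purely an illustrative sketch: the function names, the stubbed ASR pass, and the correction table are all hypothetical and do not reflect Verbit's actual system or API.

```python
# Hypothetical sketch of a "three loop" captioning pipeline:
#   loop 1: automatic speech recognition produces a draft caption,
#   loop 2: a human editor corrects the draft,
#   loop 3: the (draft, corrected) pair is recorded as training data
#           for the model before the final reviewed caption is emitted.
# All names here are illustrative, not Verbit's real implementation.

def asr_draft(audio: bytes) -> str:
    """Loop 1: stand-in for a speech recognition pass (stubbed)."""
    return "hello wrold"  # deliberately imperfect machine draft


def human_edit(draft: str) -> str:
    """Loop 2: a human editor fixes the draft (stubbed as a lookup)."""
    corrections = {"wrold": "world"}
    return " ".join(corrections.get(word, word) for word in draft.split())


# Accumulated (draft, corrected) pairs the model would later train on.
training_data: list[tuple[str, str]] = []


def final_review(draft: str, corrected: str) -> str:
    """Loop 3: log the correction pair for retraining, emit final caption."""
    training_data.append((draft, corrected))
    return corrected


def caption(audio: bytes) -> str:
    """Run one clip through all three loops."""
    draft = asr_draft(audio)
    return final_review(draft, human_edit(draft))


print(caption(b"audio-clip"))  # -> "hello world"
```

The key design point the article hints at is the feedback loop: every human correction becomes training data, so the automatic first pass should improve over time and shrink the manual-editing step.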
Speech-to-text closed captioning is improving people's lives: it makes content more accessible and easier for viewers to comprehend. By applying analytics to this space, speech-to-text closed captioning can reach a wider audience, who can then reap the benefits of the service.