The demand for captioning has grown exponentially with the rise of global media consumption and stricter accessibility regulations, such as the European Accessibility Act (EAA), which will come into force in 2025. While this is of great benefit to around 10% of Europeans, the rules for providing accessibility vary from service to service.
For public broadcasters, the regulatory requirements are very high compared to private broadcasters, according to an EU-wide study. This study also quotes an estimate of the cost of providing accessibility: “The provision of broadcasting accessibility services results in significant additional costs for broadcasters, ranging from 1% to 30% of production costs”.
In recent years, we have carried out several live subtitling projects with private broadcasters, where the regulatory requirements may be less demanding but where considerable progress has nonetheless been made.
Traditional manual subtitling has its place, but it remains labour-intensive and costly, especially for live broadcasts. There is still a long way to go before AI can deliver full closed captions (with sound effects), but a 24/7 live news channel cannot realistically offer manual live subtitling around the clock, and this is where we come in:
AI live captioning is an efficient, scalable solution that can dramatically improve the availability of captions for live and on-demand content. With our dedicated architecture, we are able to provide 24/7 AI subtitling, combining transcription with powerful dictionaries and delivering the result via linear teletext.
The benefits of aiconix AI subtitling via teletext
- Improved accessibility: The upcoming EAA regulations will make subtitles for audiovisual content mandatory. Aiconix's AI captioning technology helps broadcasters and streaming services meet these requirements, ensuring that content is accessible to people with hearing impairments as well as a wider global audience. Learn more about accessibility in media.
- Speed and scalability: Unlike manual captioning, which takes time to complete, AI-powered live captioning generates captions in real time, day and night. This is particularly beneficial for live broadcasts such as news, sports and events, where immediate accessibility is critical.
- Cost-effectiveness: Manual subtitling can be prohibitively expensive for many media providers, especially for live regional content. AI reduces these costs significantly, enabling implementation at a fraction of the price.
- Broadcast grade: Designed for broadcast workflows, our system provides not only an architecture for 24/7 transcription but also automated recovery options in the event of a failure.
- Integrated workflow: Subtitles are delivered directly to teletext via the inserter, giving viewers the same proven experience while AI transcription runs seamlessly behind the scenes.
Deep Live Hub: Leveraging AI for Live Subtitling via Teletext
The DeepVA platform and its Deep Live Hub enable real-time transcription of spoken content into text, providing the basis for instant subtitle generation. The platform’s AI models are trained to recognize speech in multiple languages, making it highly effective for live translation and transcription across multiple media formats.
Technically, DeepVA’s AI models process the live audio stream and generate subtitles that can be delivered via HTTP Live Streaming (HLS) as well as via CC, BurnedIn and AMT (our JSON-based format). For integration into traditional broadcast systems, Aiconix works with SDI insertion technologies from partners like Peter Albrecht Elektronik. The Albrecht Inserter embeds these AI-generated subtitles into an SDI signal using teletext standards.
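AMT is aiconix's own format and its exact schema is not described here, so the following is only a hypothetical sketch of what a JSON-based subtitle cue for live delivery could look like; every field name in this example is an assumption for illustration, not the actual AMT specification.

```python
import json

def make_cue(text, start_ms, end_ms, row=22):
    """Build a hypothetical JSON subtitle cue.

    Field names are illustrative only; the real AMT schema may differ.
    """
    return {
        "type": "subtitle",
        "start_ms": start_ms,  # cue start, relative to stream time
        "end_ms": end_ms,      # cue end
        "row": row,            # teletext row on which to render the cue
        "text": text,
    }

cue = make_cue("Good evening from the newsroom.", 12_000, 15_500)
payload = json.dumps(cue)  # what a downstream inserter might consume
```

A JSON payload like this is easy to route through a broadcast chain, since any component can parse it without format-specific tooling.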
This system works seamlessly to integrate subtitles into both HD and SD signals, allowing broadcasters to serve audiences on a variety of platforms. Using this workflow, we are able to subtitle one of Germany’s leading 24/7 live news programmes, even at 2am, whilst keeping the costs low.
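Teletext imposes hard layout constraints: a page row is 40 characters wide, and some columns are lost to control codes, so live transcripts must be wrapped into short rows before insertion. The sketch below assumes a usable width of 37 characters and at most two visible rows per cue; both numbers are illustrative assumptions, and a real inserter pipeline will differ.

```python
def wrap_for_teletext(transcript: str, width: int = 37, max_rows: int = 2):
    """Greedy word-wrap a transcript chunk into teletext-sized rows.

    The 37-column usable width is an assumption (40-column rows minus
    control/colour codes); real systems may use different limits.
    """
    rows, current = [], ""
    for word in transcript.split():
        candidate = f"{current} {word}".strip()
        if len(candidate) <= width:
            current = candidate
        else:
            rows.append(current)
            current = word
    if current:
        rows.append(current)
    # A live cue usually shows only a couple of rows; overflow would be
    # carried into the next cue in a real system rather than dropped.
    return rows[:max_rows]

lines = wrap_for_teletext(
    "AI live captioning keeps a round-the-clock news channel "
    "accessible even in the middle of the night."
)
```

Greedy wrapping is the simplest choice here; production systems also have to handle line-break aesthetics, speaker colours and cue timing.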
Beyond Subtitling: The Future of Accessible Media
The future of audiovisual media lies in reaching the roughly 10% of European audiences with disabilities on equal terms. Offering your programmes to all possible viewers isn’t just a matter of compliance; it’s about fostering inclusivity and sustainability by ensuring no group is excluded. Accessibility also boosts staff motivation: people know their work can be enjoyed by everyone. And there are small victories too: imagine being on the subway, where everyone can enjoy content without needing headphones or disturbing those around them, thanks to AI-generated subtitles.
But subtitling is just the start. With DeepVA as a composite AI platform, we can simplify dialogue for broader comprehension, or, through start-ups we bring onto our platform, offer automated dubbing in multiple languages and automatically enhanced dialogue for greater accessibility. This makes content more accessible while also unlocking new markets through multilingual subtitles, catering to both global audiences and regional minorities.
As AI services rapidly improve in quality and functionality, tools like Deep Live Hub not only solve today’s subtitling challenges but also future-proof your content for accessibility laws like the European Accessibility Act, ensuring you’re ready for the demands of tomorrow.
AI-powered live captioning is not only cost-effective but essential for meeting upcoming accessibility regulations, ensuring compliance while expanding audience reach.