The AI Act: 2026 Update

The AI Act, the European Union's law regulating artificial intelligence, is currently in a crucial implementation phase. Since several important provisions were postponed last month, it is a good time to explain where things currently stand.

The European Commission recently proposed postponing the introduction of certain regulations for so-called "high-risk AI." This is intended to allow more time for the development of technical standards and the associated compliance measures. The delay chiefly affects the stricter requirements for AI systems considered particularly risky, such as those used in healthcare, law enforcement, and the justice system. It gives companies and public authorities more time to adapt to the new requirements and makes the legislation easier to implement in practice.

Governments and their advisors are currently working intensively on fine-tuning the implementation of the AI Act. The biggest challenges lie in clarifying the responsibilities of the many national authorities and in practical application.

As long as these uncertainties remain, they pose risks for the economy and tend to encourage a wait-and-see attitude among businesses. The delay of these provisions highlights the complexity of implementing the AI Act and underscores the need to balance security with necessary innovation. With our numerous European customers in the media industry and public administration, there is no question that we will engage proactively with the AI Act, even while legal issues remain unresolved. Here is a brief overview of where we stand:

DeepVA AI platform: Flexible, multimodal, and cloud-independent

With our DeepVA AI platform, we develop and operate AI solutions used productively by media companies, archives, public institutions, and other organizations across Europe. At its heart is our composite AI approach with a central API layer, which allows our customers to integrate multimodal AI models into their workflows: proprietary DeepVA models, customer-specific models, open-source models, and third-party models. Another key feature is that our platform can be operated independently of public cloud providers, on our own secure infrastructure.

AI Act compliance, information security, and data protection

The EU AI Act, which was published in the Official Journal in the summer of 2024 and has been in force since August 2024, will phase in its substantive obligations through 2027, particularly for high-risk systems. We began early to align our organization, platform architecture, and development processes systematically with the requirements of the AI Act, and we view compliance as an essential part of our product promise. This does not start with the AI Act but even earlier, with information security: we already operate an active information security management system (ISMS) based on the principles of ISO/IEC 27001:2022. Among other things, this ISMS includes defined roles and responsibilities, documented guidelines on information security and data protection, structured risk and action management, and defined processes for change and release management as well as for managing suppliers and subcontractors. The controls anchored in our ISMS are continuously reviewed for effectiveness and integrated with the technical capabilities of our platform.
In terms of data protection, we work entirely on the basis of the General Data Protection Regulation and the relevant national regulations. From the outset, DeepVA has been designed to make data processing as data-efficient, transparent, and controllable as possible. Our platform runs on European infrastructure and supports comprehensive on-premises and offline operating scenarios, enabling sensitive workloads such as transcriptions, translations, or facial recognition to be processed entirely within the EU and, if desired, exclusively within the customer's area of responsibility. Our customers can also rest assured when it comes to data sovereignty: customer data is not used for downstream, proprietary model or product development, and data sovereignty remains entirely with our customers.

Transparency, control, and future-proof AI solutions

At the same time, we have placed the key requirements of the AI Act for AI systems at the center of our product development. This is the basis of our work: by using our platform, customers minimize the concerns and effort associated with AI Act regulation and general data protection.
Among these requirements, a structured procedure for the risk classification of AI functionalities based on the risk levels established in the AI Act is particularly important. Equally important are a systematic approach to data and model governance, including data quality requirements; the creation and maintenance of technical documentation for our systems; and logging concepts that support traceability, audits, and monitoring during ongoing operations.
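To illustrate what classification along the AI Act's risk levels means in practice, the Act's four tiers can be sketched as a simple lookup. This is a hypothetical Python sketch, not DeepVA's actual classification procedure; the example use cases follow categories commonly cited from the Act (Art. 5 prohibited practices, Annex III high-risk systems, Art. 50 transparency obligations):

```python
from enum import Enum
from typing import Optional

class RiskTier(Enum):
    """The four risk levels established by the EU AI Act."""
    UNACCEPTABLE = "prohibited practice (Art. 5)"
    HIGH = "high-risk system (Annex III)"
    LIMITED = "transparency obligations (Art. 50)"
    MINIMAL = "no specific obligations"

# Illustrative mapping of example use cases to tiers.
# A real assessment requires legal review of the concrete system.
EXAMPLES = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "biometric identification in law enforcement": RiskTier.HIGH,
    "AI-assisted recruitment decisions": RiskTier.HIGH,
    "customer-facing chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> Optional[RiskTier]:
    """Return the illustrative tier for a known example use case,
    or None for unknown cases, which need an actual legal assessment."""
    return EXAMPLES.get(use_case)
```

Any real classification procedure must also track the legal basis for each decision, which is why the tier values above carry the relevant article or annex as documentation.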

Where our technology is used in areas of application that may be considered risky under the AI Act, we advise and support our customers in correctly classifying and implementing the relevant obligations, for example with regard to conformity assessments, transparency obligations, or human oversight.
Transparency and control for our customers are central to our compliance approach. DeepVA offers configurable workflows, traceable results, and the ability to train and operate customer-specific models without becoming overly dependent on individual cloud providers or proprietary ecosystems. In this way, we support both the AI Act's goals of strengthening transparency and governance and our customers' requirements for data sovereignty and technical independence.

Regulatory future-proofing as a core principle of DeepVA

As the AI Act is still in a phased implementation process, with supplementary harmonized standards, guidelines, and interpretative aids from the supervisory authorities expected in the coming years, we expressly regard our current level of compliance as an interim stage in a continuous process of adaptation and improvement. We are closely monitoring the further concretization of the legal framework, assessing its impact on our platform and internal processes, and plan to translate our conceptual groundwork into robust, verifiable evidence as soon as the relevant standards and procedures are finalized.
Our aim is to offer DeepVA as an AI platform that is not only functional but also future-proof in regulatory terms. AI Act compliance, data protection in accordance with the GDPR, and a high level of information security are integral components of our product and company. These are precisely the advantages that we, as a European provider, can derive from EU regulation: legal certainty for our users, and AI solutions that create the necessary basis of trust for sensitive areas without creating dependence on large tech corporations.
As of December 2025
