The AI Act, the European Union’s law regulating artificial intelligence, is currently in a crucial implementation phase. Last month, several important provisions were postponed, so it is a good time to explain where things stand.
The European Commission recently proposed postponing the introduction of certain regulations for so-called “high-risk AI.” This is intended to allow more time for the development of technical standards and the associated compliance measures. The delay particularly affects the stricter requirements for AI systems considered especially risky, such as those used in healthcare, law enforcement, and the justice system. It gives companies and public authorities more time to adapt to the new requirements and makes the legislation easier to implement in practice.
Governments and their advisors are currently working intensively on fine-tuning the implementation of the AI Act. The biggest challenges lie in clarifying the responsibilities of the many national authorities and in practical application.
As long as these uncertainties remain, they pose risks for the economy and businesses and tend to encourage a wait-and-see attitude. The delay of these provisions highlights the complexity of implementing the AI Act and underscores the need to balance safety with necessary innovation. With our numerous European customers in the media industry and public administration, there is no question that we will engage proactively with the AI Act, even while legal questions remain open. Here is a brief overview of where we stand:
DeepVA AI platform: Flexible, multimodal, and cloud-independent
With our DeepVA AI platform, we develop and operate AI solutions that media companies, archives, public institutions, and other organizations in Europe use in production. At its heart is our composite AI approach with a central API layer, which allows our customers to integrate multimodal AI models into their workflows: proprietary DeepVA models, customer-specific models, open-source models, and third-party models. Another key feature is that the platform can be operated cloud-independently on our own secure infrastructure.
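Purely as an illustration of the idea of a central API layer over interchangeable model backends, the following sketch shows one way such a composite approach can be structured. All names here (`CompositeAI`, `ModelBackend`, `EchoTranscriber`) are hypothetical and are not DeepVA’s actual API.

```python
from typing import Protocol


class ModelBackend(Protocol):
    """Any model: proprietary, customer-specific, open-source, or third-party."""

    def run(self, payload: dict) -> dict: ...


class CompositeAI:
    """Hypothetical central API layer: one entry point, pluggable backends per task."""

    def __init__(self) -> None:
        self._backends: dict[str, ModelBackend] = {}

    def register(self, task: str, backend: ModelBackend) -> None:
        # Backends can be swapped without changing the calling workflow.
        self._backends[task] = backend

    def run(self, task: str, payload: dict) -> dict:
        if task not in self._backends:
            raise KeyError(f"no backend registered for task {task!r}")
        return self._backends[task].run(payload)


class EchoTranscriber:
    """Stand-in for a real speech-to-text model (demo only)."""

    def run(self, payload: dict) -> dict:
        return {"transcript": payload["audio_ref"], "model": "echo-demo"}


api = CompositeAI()
api.register("transcription", EchoTranscriber())
result = api.run("transcription", {"audio_ref": "interview_01.wav"})
```

The point of this pattern is that workflows depend only on the task name and payload shape, so a cloud model, an on-premises model, or a customer-trained model can sit behind the same call.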
AI Act compliance, information security, and data protection
The EU AI Act, published in the Official Journal in the summer of 2024 and in force since August 2024, will unfold its substantive obligations in several stages until 2027, particularly for high-risk systems. We began early to systematically align our organization, platform architecture, and development processes with the requirements of the AI Act, and we view compliance as an essential part of our product promise. This work does not start with the AI Act but even earlier, with information security: we already operate an active information security management system (ISMS) based on the principles of ISO/IEC 27001:2022. Among other things, this ISMS includes defined roles and responsibilities, documented guidelines on information security and data protection, structured risk and action management, and defined processes for change and release management as well as for managing suppliers and subcontractors. The controls anchored in our ISMS are continuously reviewed for effectiveness and integrated with the technical capabilities of our platform.
In terms of data protection, we work entirely on the basis of the General Data Protection Regulation (GDPR) and the relevant national regulations. From the outset, DeepVA has been designed to make data processing as data-minimizing, transparent, and controllable as possible. Our platform runs on European infrastructure and supports comprehensive on-premises and offline operating scenarios, enabling sensitive workloads, such as transcription, translation, or facial recognition, to be processed entirely within the EU and, if desired, exclusively within the customer’s own area of responsibility. Our customers can also rest assured when it comes to data sovereignty: their data is not used for our own downstream model or product development, and data sovereignty remains entirely with our customers.
Transparency, control, and future-proof AI solutions
At the same time, we have placed the AI Act’s key requirements for AI systems at the center of our product development. The goal: by using our platform, our customers minimize the concerns and effort associated with AI Act regulation and general data protection.
Among these requirements, a structured procedure for the risk classification of AI functionalities along the risk levels established in the AI Act is particularly important. Equally important are a systematic approach to data and model governance, including data quality requirements; the creation and maintenance of technical documentation for our systems; and logging concepts that support traceability, audits, and monitoring during ongoing operations.
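The AI Act’s tiered structure (prohibited practices, high-risk, limited-risk transparency cases, and minimal risk) lends itself to a first coarse triage before legal review. The sketch below is illustrative only: the mapping of areas to tiers is a simplified assumption, and a real classification always requires legal assessment of the concrete use case against the Act’s Annex III and Article 5.

```python
from enum import Enum


class RiskLevel(Enum):
    """The four risk tiers of the EU AI Act (Regulation (EU) 2024/1689)."""

    UNACCEPTABLE = "unacceptable"  # prohibited practices (Art. 5)
    HIGH = "high"                  # e.g. Annex III use cases
    LIMITED = "limited"            # transparency obligations, e.g. chatbots
    MINIMAL = "minimal"            # no specific obligations


# Simplified, assumed mapping for illustration; the Act's actual high-risk
# list in Annex III is longer and context-dependent.
HIGH_RISK_AREAS = {"healthcare", "law_enforcement", "justice", "biometrics"}


def triage(use_case_area: str, interacts_with_humans: bool) -> RiskLevel:
    """Coarse first-pass classification of an AI functionality."""
    if use_case_area in HIGH_RISK_AREAS:
        return RiskLevel.HIGH
    if interacts_with_humans:
        # Systems interacting directly with people may carry
        # transparency obligations even when not high-risk.
        return RiskLevel.LIMITED
    return RiskLevel.MINIMAL
```

Such a triage result would feed into the documentation and logging obligations described above: high-risk findings trigger the full conformity workflow, while minimal-risk findings only need to be recorded.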
Where our technology is used in areas of application that may be considered risky under the AI Act, we advise and support our customers in correctly classifying these use cases and implementing the relevant obligations, for example with regard to conformity assessments, transparency obligations, or human oversight.
Transparency and control for our customers are a central element of our compliance approach. DeepVA offers configurable workflows, traceable results, and the ability to train and operate customer-specific models without becoming overly dependent on individual cloud providers or proprietary ecosystems. In this way, we support both the AI Act’s goals of strengthening transparency and governance and the requirements for data sovereignty and technical independence.
Regulatory future-proofing as a core principle of DeepVA
As the AI Act is still in a phased implementation process and supplementary harmonized standards, guidelines, and interpretative aids from the supervisory authorities are expected in the coming years, we expressly regard our current level of compliance as an interim stage in a continuous process of adaptation and improvement. We are closely monitoring the further concretization of the legal framework, assessing its impact on our platform and our internal processes, and plan to translate our conceptual groundwork into robust, verifiable evidence as soon as the relevant standards and procedures are finalized.
Our aim is to offer DeepVA, an AI platform that is not only functional but also future-proof in regulatory terms. AI Act compliance, data protection in accordance with the GDPR, and a high level of information security are integral components of our product and company. These are precisely the advantages that we, as a European provider, can derive from EU regulation: legal certainty for our users and AI solutions that create the necessary basis of trust for sensitive areas without creating dependence on tech corporations.
As of December 2025


