The vfm’s spring conference shed light on the tension between artificial intelligence, sustainability and ethics, with surprising findings and a clear appeal to media professionals. Three opening presentations opened up a rich space for thought and concrete approaches to action.
At the Biosphäre Potsdam, where an artificial ecosystem is kept in balance by digital technology, the vfm e.V. opened this year’s spring conference with a key question of our time: How does the rapid rise of artificial intelligence fit in with ecological sustainability and ethical responsibility? The experts’ answers painted a multi-layered picture — with direct consequences for everyone who works with information, digital tools and journalistic standards on a daily basis.

Technology, nature and ethics: no contradiction
Nico Franzen, Managing Director of the Biosphäre Potsdam, made a surprising statement in his welcoming address: “What we are experiencing here in our climate zones is a harmonious symbiosis between technology and nature.” Sensors, algorithms and automated control systems maintain the delicate balance of the artificial rainforest — a model that can also be applied to the handling of knowledge and media. Just as an ecosystem needs care, data and content also require conscious, sustainable and responsible handling. AI can help to make content accessible here — but the origin and responsibility remain human.
This responsibility concerns not only energy consumption, but also ethical questions around the use of AI. How much ethics does artificial intelligence need, and how much do we ourselves need? This question was the focus of the lecture by Prof. Dr. Sabine Ammon (TU Berlin). For media professionals, it offers valuable food for thought and a clear appeal: help shape the technology instead of just using it!

The ecological downside of the AI revolution
The opening keynote was given by Prof. Dr. Marius Lindauer, AI expert at Leibniz Universität Hannover, who linked the current development of AI models to their ecological implications. His figures made people sit up and take notice: modern LLMs such as GPT‑4 or Gemini consume quantities of electricity during their development that are comparable to the annual requirements of smaller cities. A considerable share of the development costs of more than 100 million US dollars per large model goes toward energy. Although the figures vary from study to study, they clearly show why such foundational AI models are usually only made possible by the financial clout of Big Tech, and how some providers even push beyond the legal framework.
“Global AI energy consumption is now on a par with that of entire industrialized nations,” reported Lindauer. “We are talking about amounts of electricity comparable to the total consumption of countries such as Spain or Japan.” Particularly problematic: billions of liters of water are used to cool the data centers, sometimes in regions that already suffer from water shortages, such as California, home of Big Tech’s Silicon Valley.
An MIT study from 2024 puts the carbon footprint of a single ChatGPT prompt at around 9 grams of CO2, comparable to printing a page of paper. With millions of daily requests, this adds up to a significant ecological factor.
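To get a feel for the order of magnitude, here is a back-of-the-envelope calculation in Python. The per-prompt figure is the MIT estimate cited above; the daily request volume is a purely illustrative assumption, not a figure from the study.

```python
# Back-of-the-envelope scaling of the per-prompt footprint cited above.
# The request volume is an illustrative assumption, not a measured figure.
GRAMS_CO2_PER_PROMPT = 9         # MIT estimate (2024) quoted in the text
DAILY_PROMPTS = 100_000_000      # assumed: 100 million requests per day

daily_tonnes = GRAMS_CO2_PER_PROMPT * DAILY_PROMPTS / 1_000_000  # grams -> tonnes
print(f"~{daily_tonnes:,.0f} tonnes of CO2 per day")   # ~900 tonnes
print(f"~{daily_tonnes * 365:,.0f} tonnes per year")   # ~328,500 tonnes
```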
What is particularly worrying: even more efficient technologies do not automatically solve the problem. Lindauer referred to the Jevons paradox, an economic phenomenon in which efficiency gains lead not to less but to more resource consumption, because usage grows dramatically once the technology appears cheap and sustainable.
Jevons paradox — an example
Imagine replacing conventional light bulbs with energy-efficient LED lamps. These consume only a fraction of the electricity for the same light output.
Expectation: Electricity consumption for lighting is reduced because each lamp requires less energy.
What actually happens (Jevons paradox): Because light is now much cheaper:
- More lamps are installed (e.g. for atmospheric lighting, outdoor lighting, continuous operation).
- People leave the light on more often or for longer.
- New lighting concepts (e.g. smart systems) lead to further consumption.
Consequence: The total electricity consumption for lighting can even increase despite higher efficiency, because the lower price per lighting unit increases the incentive to use more.
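The same effect can be sketched in numbers. All figures below are invented for illustration: per-lamp consumption drops by 85 percent, but cheaper light invites more lamps and longer burn times, and total consumption still rises.

```python
# The Jevons paradox in numbers; all figures are invented for illustration.
bulb_watts, led_watts = 60, 9        # per-lamp consumption drops by 85%

# Usage before and after the switch: cheaper light invites more lamps
# and longer burn times.
lamps_before, hours_before = 5, 3
lamps_after, hours_after = 20, 8

before_kwh = bulb_watts * lamps_before * hours_before / 1000   # 0.90 kWh/day
after_kwh = led_watts * lamps_after * hours_after / 1000       # 1.44 kWh/day

print(f"before: {before_kwh:.2f} kWh/day, after: {after_kwh:.2f} kWh/day")
# Per-lamp efficiency improved roughly sevenfold, yet total consumption rose.
```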
In the past, people were happy if their Nokia cell phone lasted two days on a single charge. Today, smartphones and their batteries are much more powerful and energy-efficient — but the battery still usually only lasts one day. The reason: as efficiency has increased, so has usage — and above all the complexity of the computing processes.
“What we experienced with smartphones is repeating itself with AI,” warned Lindauer. “The more powerful and accessible the technology becomes, the more naturally and frequently we use it — even for tasks that could be solved using much more resource-efficient methods.”

Here is another example from our daily work: although tags could be generated with our DeepVA Object and Scene Recognition, one is naturally inclined to solve the task with state-of-the-art tools such as our DeepVA Visual Understanding, a function that is based on a VLM and is therefore much more computationally intensive. Such considerations and trade-offs have to be made when it comes to AI and sustainability. Our functions, however, are very efficient: they do not require large data centers and mostly run on existing infrastructure.
AI in media practice: an opportunity with a catch
Whether automatic transcription, news aggregation or content generation — AI has long been part of everyday editorial life. However, as Prof. Ammon emphasized, the real challenge lies not in the tool itself, but in the practices that develop around it. “Most problems are not caused by the artifact, but by the culture of application,” said Ammon. It is not the result of the algorithms that is the problem, but the context in which the result is placed. Media professionals must learn to translate existing ethical guidelines such as transparency or fairness into the respective context and put them into practice.
This means:
- Plausibility check: Is the AI result comprehensible? Can I recognize how the AI arrived at this result?
- Reflection competence: Does the team have the knowledge to categorize it? Does the editorial team know enough about AI to evaluate the result correctly?
- Domain ethics: Are profession-specific values such as freedom of the press taken into account? Do we ensure that basic journalistic rules are observed when using AI?
- Functional transparency: Can journalists explain how the result was achieved? Can we make it clear to our readers how and why the AI produced this result?
This requires so-called epistemic sovereignty — in other words, the ability to translate AI output into journalistically sound judgments. Journalists should not only be able to accept the results of AI, but also to question, evaluate and place them in the right context. It is not enough to trust the technology — you have to know what it does, why it does it and how reliable it is in order to be able to deal with it ethically. Prof. Ammon did not question the many advantages of the technology.

Sustainability through design — not by doing without
Despite the sobering results at first glance, Lindauer also clearly sees the positive potential: AI can help to save resources through optimized logistics, intelligent production control and efficient information processing and is already in use in many places. However, the decisive factor here is also “how often, for what and how consciously”. Instead of relying on gigamodels and cloud solutions, smaller, locally operated systems could also perform many tasks efficiently — with significantly lower resource consumption.
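What "smaller, locally operated" can look like in practice: a minimal sketch using the open-source transformers library. The model name is just an example of a compact, distilled summarization model; any comparable model from the Hugging Face hub would do.

```python
# Minimal sketch: a compact, locally run model instead of a cloud "gigamodel".
# Requires: pip install transformers torch
# The model name is an example of a distilled summarization model; it runs
# on an ordinary workstation, with no data center and no per-request
# network traffic to an external provider.
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

text = (
    "Modern large language models consume quantities of electricity during "
    "their development that are comparable to the annual requirements of "
    "smaller cities, and billions of liters of water cool the data centers."
)
print(summarizer(text, max_length=40, min_length=10)[0]["summary_text"])
```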
Prof. Ammon also called on media professionals to play an active role in shaping AI. Ethics should not be an afterthought: it must be an integral part of the development, selection and use of tools, and users must raise it often enough to be heard. Active co-design, for example through staff training, self-developed guidelines or direct feedback to developers while tools are being built, is not an optional extra but a duty.
Media work with responsibility
For media professionals, this results in a differentiated picture of their role: on the one hand, AI tools can make work easier; on the other, every use carries ecological and ethical baggage. A 2024 survey by the Reuters Institute shows that 78% of media companies use AI regularly, but only 12% take sustainability criteria into account. The discrepancy between awareness and action in this study appears considerable.
Specific recommendations for action were therefore also made at the spring conference:
- Deliberate use of AI: Not all research, not all text creation, not all image editing requires AI support. The question "Is a performance-hungry model really necessary here?" should become part of the professional self-image.
- Competent selection: The more precisely the requirements are formulated, the more efficiently AI can work. Media professionals should learn to design prompts so that they achieve optimal results with minimal computing power. The input language often plays a major role; English, for example, is usually more efficient (see the sketch after this list).
- Media education: Anyone who uses AI must also be able to explain its limitations, not only in footnotes but also in the narrative. Without this knowledge, you enter into a blind dependency.
- Alternative technologies: Local, specialized AI solutions can be sufficient for many tasks and save considerable resources compared to cloud-based services.
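As a small illustration of the language point above, OpenAI's tokenizer library tiktoken can show how many tokens the same request costs in English and in German; model compute scales with token count. The exact counts depend on the tokenizer, so treat this as a sketch.

```python
# Why input language matters: the same request often costs more tokens in
# German than in English, and compute scales with the number of tokens.
# Requires: pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer of GPT-4-class models

english = "Summarize this article in three sentences."
german = "Fasse diesen Artikel in drei Sätzen zusammen."

print(len(enc.encode(english)), "tokens (English)")
print(len(enc.encode(german)), "tokens (German)")
# German texts typically tokenize into noticeably more tokens than English.
```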

A comprehensive awareness in dealing with AI also appears to pay off in measurable terms: according to a study by the Boston Consulting Group, media companies could reduce their digital carbon footprint by up to 40% through the conscious selection of AI, without compromising on quality. The geopolitical upheavals of recent months played no role in this study; their effects are still to come.
Conclusion: AI needs attitude — and active design
The vfm’s spring conference made it unmistakably clear that artificial intelligence sits somewhere between climate killer and beacon of hope, between neutral tool and socially effective medium. Where the journey leads is not a purely technical question but one of design, and it lies largely in the hands of media professionals.
The AI Act may set the legal framework, but ethical practice is decided in everyday editorial work: in the selection of tools, in the reflective handling of results and in responsibility towards the audience and the environment.
A new understanding of technology is therefore required: the benchmark should not be what is technically feasible, but what can be responsibly pursued. Those who work with AI do not have to do everything that is possible; they should do what makes sense and is justifiable.
Prof. Lindauer therefore concluded by asking, somewhat heretically, whether his introductory slide of AI-generated self-portraits was worth the CO2 emissions just to capture the audience’s attention. A good question.
