AI: Climate killer, bearer of hope — and what about ethics?

The vfm’s spring conference shed light on the tension between artificial intelligence, sustainability and ethics, with surprising findings and a clear appeal to media professionals. Three opening presentations opened up a stimulating space for thought and concrete approaches to action.

At the Biosphäre Potsdam, where an artificial ecosystem is kept in balance by digital technology, the vfm e.V. opened this year’s spring conference with a key question of our time: How does the rapid rise of artificial intelligence fit in with ecological sustainability and ethical responsibility? The experts’ answers painted a multi-layered picture, with direct consequences for everyone who works with information, digital tools and journalistic standards on a daily basis.

Experts attending digitally and on site from a wide range of media companies joined in the discussion

Technology, nature and ethics: no contradiction

Nico Franzen, Managing Director of the Biosphäre Potsdam, made a surprising statement in his welcoming address: “What we are experiencing here in our climate zones is a harmonious symbiosis between technology and nature.” Sensors, algorithms and automated control systems maintain the delicate balance of the artificial rainforest, a model that can also be applied to the handling of knowledge and media. Just as an ecosystem needs care, data and content also require conscious, sustainable and responsible handling. AI can help to make content accessible here, but the origin and responsibility remain human.

This responsibility concerns not only energy consumption, but also ethical issues relating to the use of AI. How much ethics does artificial intelligence need, and how much of it do we need? This question was the focus of the lecture by Prof. Dr. Sabine Ammon (TU Berlin). For media professionals, it offers valuable food for thought and a clear appeal: help shape the technology instead of just using it!

Biosphäre Potsdam, a place of sustainability: ecology, education and technology are kept in balance here to maintain a stable climate

The ecological downside of the AI revolution

The opening keynote was given by Prof. Dr. Marius Lindauer, AI expert at Leibniz Universität Hannover, who linked the current development of AI models with their ecological implications. His figures made people sit up and take notice: modern LLMs such as GPT-4 or Gemini consume quantities of electricity during their development that are comparable to the annual requirements of smaller cities. Of the development costs of more than 100 million US dollars per large-scale model, a considerable share goes into energy. Although the figures vary depending on the study, they clearly show why such basic AI models are usually only made possible by the financial clout of Big Tech, and how some providers even operate beyond the legal framework.

“Global AI energy consumption is now on a par with that of entire industrialized nations,” reported Lindauer. “We are talking about amounts of electricity that are comparable to the total consumption of countries such as Spain or Japan.” Particularly problematic: billions of liters of water are used to cool the data centers, sometimes in regions that already suffer from water shortages, such as California, the home of Big Tech in Silicon Valley.

An MIT study from 2024 puts the carbon footprint of a single ChatGPT prompt at around 9 grams, comparable to printing a page of paper. With millions of daily requests, this adds up to a significant ecological factor.
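A quick back-of-the-envelope calculation makes the order of magnitude tangible. The 9 grams per prompt is the figure cited above; the daily request volume is an illustrative assumption, not a number from the talk:

```python
# Back-of-the-envelope: daily CO2 from chat prompts.
# 9 g per prompt is the figure cited above; the request
# volume is an illustrative assumption.
CO2_PER_PROMPT_G = 9
PROMPTS_PER_DAY = 10_000_000  # assumed volume

daily_tonnes = CO2_PER_PROMPT_G * PROMPTS_PER_DAY / 1_000_000  # grams -> tonnes
print(f"{daily_tonnes:.0f} t CO2 per day")         # 90 t
print(f"{daily_tonnes * 365:.0f} t CO2 per year")  # ~32,850 t
```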

What is particularly worrying is that even more efficient technologies do not automatically solve the problem. Lindauer referred to the Jevons paradox, an economic phenomenon in which gains in efficiency lead not to less but to more resource consumption, because usage rises sharply once the technology appears cheap and sustainable.

Jevons paradox — an example

Imagine replacing conventional light bulbs with energy-efficient LED lamps. These consume only a fraction of the electricity for the same light output.

Expectation: Electricity consumption for lighting is reduced because each lamp requires less energy.

What actually happens (Jevons paradox): Because light is now much cheaper:

  • More lamps are installed (e.g. for atmospheric lighting, outdoor lighting, continuous operation).
  • People leave the light on more often or for longer.
  • New lighting concepts (e.g. smart systems) lead to further consumption.

Consequence: The total electricity consumption for lighting can even increase despite higher efficiency, because the lower price per lighting unit increases the incentive to use more.
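The rebound effect can be put into numbers. A minimal sketch with purely illustrative figures (the wattages and usage hours below are assumptions, not data from the presentation):

```python
# Toy Jevons-paradox calculation with illustrative numbers.
# A 60 W bulb is replaced by a 10 W LED (6x more efficient),
# but cheaper light leads to 8x as many lamp-hours.
watts_old, hours_old = 60, 1_000   # one bulb, 1,000 h/year
watts_new, hours_new = 10, 8_000   # LEDs everywhere, often left on

kwh_old = watts_old * hours_old / 1_000  # 60 kWh
kwh_new = watts_new * hours_new / 1_000  # 80 kWh

print(f"before: {kwh_old:.0f} kWh, after: {kwh_new:.0f} kWh")
# Despite a sixfold efficiency gain, total consumption rises by a third.
```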

In the past, people were happy if their Nokia cell phone lasted two days on a single charge. Today, smartphones and their batteries are much more powerful and energy-efficient, but the battery still usually only lasts one day. The reason: as efficiency has increased, so has usage, and above all the complexity of the computing processes.

“What we experienced with smartphones is repeating itself with AI,” warned Lindauer. “The more powerful and accessible the technology becomes, the more naturally and frequently we use it, even for tasks that could be solved using much more resource-efficient methods.”

The audience also responded actively to Prof. Lindauer’s presentation

Here is another example from our daily work: although you could generate tags with our DeepVA Object and Scene Recognition, it is tempting to solve the task with state-of-the-art tools such as our DeepVA Visual Understanding, a function that is based on a VLM and is therefore much more computationally intensive. These are exactly the considerations and trade-offs that AI and sustainability demand. Our functions, however, are very efficient and do not require large data centers; they mostly run in existing infrastructure.
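As a minimal sketch, this is how such a routing decision could look in code. The function names below are invented placeholders for illustration, not the actual DeepVA API:

```python
# Hypothetical sketch: route between a lightweight tagger and a
# heavier VLM path. The function names are invented placeholders,
# not the real DeepVA interface.

def run_object_scene_recognition(image_path: str) -> list[str]:
    # Placeholder for a lightweight classifier (cheap and fast).
    return ["studio", "microphone"]

def run_visual_understanding(image_path: str) -> list[str]:
    # Placeholder for a VLM-based analysis (rich, but compute-heavy).
    return ["a presenter speaking into a microphone in a TV studio"]

def tag_image(image_path: str, need_rich_description: bool = False) -> list[str]:
    # Default to the resource-efficient path; only escalate to the
    # VLM when the richer output is actually needed.
    if need_rich_description:
        return run_visual_understanding(image_path)
    return run_object_scene_recognition(image_path)

print(tag_image("frame_0001.jpg"))  # cheap path by default
```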

AI in media practice: an opportunity with a catch

Whether automatic transcription, news aggregation or content generation: AI has long been part of everyday editorial life. However, as Prof. Ammon emphasized, the real challenge lies not in the tool itself, but in the practices that develop around it. “Most problems are not caused by the artifact, but by the culture of application,” said Ammon. It is not the result of the algorithms that is the problem, but the context in which the result is placed. Media professionals must learn to translate existing ethical guidelines such as transparency or fairness into the respective context and put them into practice.

This means:

  • Plausibility check: Is the AI result comprehensible?

    Can I recognize how the AI arrived at this result?

  • Reflection competence: Does the team have the knowledge to put the result in context?

    Does the editorial team know enough about AI to evaluate the result correctly?

  • Domain ethics: Are profession-specific values such as freedom of the press taken into account?

    Do we ensure that basic journalistic rules are observed when using AI?

  • Functional transparency: Can journalists explain how the result was achieved?

    Can we make it clear to our readers how and why the AI produced this result?

This requires so-called epistemic sovereignty: the ability to translate AI output into journalistically sound judgments. Journalists should not simply accept the results of AI, but question them, evaluate them and place them in the right context. It is not enough to trust the technology; you have to know what it does, why it does it and how reliable it is in order to handle it ethically. None of this, Prof. Ammon stressed, calls the many advantages of the technology into question.

Prof. Ammon made specific recommendations for the use of AI in everyday working life

Sustainability through design, not by doing without

Despite the sobering findings at first glance, Lindauer also clearly sees positive potential: AI can help to save resources through optimized logistics, intelligent production control and efficient information processing, and is already in use in many places. The decisive factor, however, is “how often, for what and how consciously”. Instead of relying on giant models and cloud solutions, smaller, locally operated systems could perform many tasks efficiently, with significantly lower resource consumption.
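What such a smaller, locally operated system might look like, as a sketch assuming the Hugging Face transformers library is available; the model choice is an illustrative assumption, not a conference recommendation:

```python
# A distilled summarization model that runs on a local workstation,
# no cloud API required (pip install transformers torch).
from transformers import pipeline

# Small distilled model instead of a giant cloud-hosted LLM.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

article = (
    "The vfm spring conference in Potsdam discussed how artificial "
    "intelligence, sustainability and ethics fit together. Speakers "
    "urged media professionals to choose tools consciously and to "
    "prefer resource-efficient models wherever they are sufficient."
)
print(summarizer(article, max_length=40, min_length=10)[0]["summary_text"])
```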

Prof. Ammon also calls on media professionals to play an active role in shaping the topic of AI. Ethics should not be an afterthought, but must be an integral part of the development, selection and use of tools, and users must raise it often enough to be heard. Active co-design, e.g. through employee training, self-developed guidelines or feedback to developers during tool development, is not an optional extra, but a duty.

Media work with responsibility

For media professionals, this results in a differentiated picture of their role: on the one hand, AI tools can make work easier; on the other, every use carries ecological and ethical baggage. A survey conducted by the Reuters Institute in 2024 shows that 78% of media companies use AI regularly, but only 12% take sustainability criteria into account. In this study, the discrepancy between awareness and action is considerable.

Specific recommendations for action were therefore also made at the spring conference:

  • Deliberate use of AI:

    Not all research, not all text creation, not all image editing requires AI support. The question “Is a resource-hungry model really necessary here?” should become part of the professional self-image.

  • Competent selection:

    The more precisely the requirements are formulated, the more efficiently AI can work. Media professionals should learn to design prompts in such a way that they achieve optimal results with minimal computing power. The input language often plays a major role: English, for example, is usually more token-efficient (see the sketch after this list).

  • Media education:

    Anyone who uses AI must also be able to explain its limitations, not only in footnotes, but also in the narrative. Without this knowledge, you are entering into a blind dependency.

  • Alternative technologies:

    Local, specialized AI solutions can be sufficient for many tasks and save considerable resources compared to cloud-based services.
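To make the prompt-language point above concrete: a small sketch assuming OpenAI’s tiktoken tokenizer is installed; fewer tokens generally mean less compute per request:

```python
# Compare how many tokens the same instruction costs in German
# and in English (requires: pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by GPT-4-class models

german = "Fasse den folgenden Artikel in drei Stichpunkten zusammen."
english = "Summarize the following article in three bullet points."

for label, text in [("German", german), ("English", english)]:
    print(f"{label}: {len(enc.encode(text))} tokens")
# English prompts often tokenize to fewer tokens, which adds up
# to less compute at scale.
```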

Prof. Ammon appealed to the audience not just to use AI, but to reflect on each individual use

Comprehensive awareness in dealing with AI also appears to generate measurable added value: according to a study by the Boston Consulting Group, media companies could reduce their digital carbon footprint by up to 40% through the conscious selection of AI, without compromising on quality. The geopolitical upheavals of recent months were not yet factored into this study; their effects are still to come.

Conclusion: AI needs attitude — and active design

The vfm’s spring conference made it unmistakably clear that artificial intelligence sits somewhere between climate killer and beacon of hope, between neutral tool and socially effective medium. Where the journey takes us is not a purely technical question but one of design, and it lies largely in the hands of media professionals.

The AI Act may set the legal framework, but ethical practice is decided in everyday editorial work: in the selection of tools, in the reflective handling of results and in responsibility towards the audience and the environment.

A new understanding of technology is therefore required: the benchmark should not be what is technically feasible, but what is responsibly targeted. Those who work with AI do not have to do everything that is possible, but should do the right thing because it makes sense and is justifiable.
Prof. Lindauer therefore concluded by asking, somewhat heretically, whether his introductory slide of AI-generated self-portraits had been worth the CO2 emissions just to attract the audience’s attention. A good question.

At the spring conference, there was much discussion of the big issues, but the smaller impulses also provided food for thought on sustainability and ecology
