Intelligent research: See how the Knowledge Graph puts your knowledge to work

To get into the topic, it helps to have some basic knowledge about graphs first. A knowledge graph is a structured representation of information in the form of entities (objects, people, or concepts) and the relationships between them. The knowledge graph receives its data from sources such as Wikidata (an open database), news agencies, social media, or editorial databases. Displayed as a graph, this data produces the familiar evidence board from countless crime films: all the people on the board are connected by red threads, making their relationships and networks visible. The knowledge graph takes this to a new level, with thousands of links and the ability to apply further algorithms to them. Basically, it is a 3D evidence board with a built-in search.
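To make this concrete, here is a minimal sketch in Python of what such a structure looks like as data: entities become nodes, relationships become labeled edges. All names and facts are invented for illustration, and the networkx library merely stands in for a real graph database.

```python
# A minimal sketch of a knowledge graph: entities as nodes,
# relationships as labeled, directed edges. People and facts
# below are invented purely for illustration.
import networkx as nx

kg = nx.MultiDiGraph()

# Entities (nodes) can carry attributes.
kg.add_node("Anna Meier", type="person")
kg.add_node("Ben Schulz", type="person")
kg.add_node("University of Potsdam", type="organization")

# Relationships (edges) carry a label describing the connection.
kg.add_edge("Anna Meier", "University of Potsdam", relation="studied_at")
kg.add_edge("Ben Schulz", "University of Potsdam", relation="studied_at")

# The "built-in search": list every outgoing connection of one entity.
for subj, obj, data in kg.edges("Anna Meier", data=True):
    print(subj, data["relation"], obj)
# -> Anna Meier studied_at University of Potsdam
```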

A practical example from the editorial environment

Suppose I am writing a current story about new research into a meeting of several people in Potsdam who may not have been in contact with each other before. The Knowledge Graph, fed from a database, can surface all the links between any two of these people within seconds and thus open up research angles that add new dimensions to the story. Perhaps two of the attendees were enrolled at the same university, or had already met on another occasion?
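As a sketch of what such a lookup could look like, the toy example below finds every shortest chain of relationships between two people in a small invented graph; a real system would run an equivalent query against a far larger, continuously updated database.

```python
# A toy version of the research query above: given two people,
# surface the chain of relationships that connects them.
# All names and facts are invented for illustration.
import networkx as nx

kg = nx.Graph()
kg.add_edge("Person A", "University of Potsdam", relation="enrolled_at")
kg.add_edge("Person B", "University of Potsdam", relation="enrolled_at")
kg.add_edge("Person B", "Conference 2019", relation="attended")

# Find every shortest chain of links between the two people.
for path in nx.all_shortest_paths(kg, "Person A", "Person B"):
    hops = [
        f'{a} --{kg.edges[a, b]["relation"]}-- {b}'
        for a, b in zip(path, path[1:])
    ]
    print(" | ".join(hops))
# -> Person A --enrolled_at-- University of Potsdam
#    | University of Potsdam --enrolled_at-- Person B
```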

A knowledge graph can provide, in a very short time, knowledge that an editor would otherwise have to acquire through considerable effort, and that is equally laborious to pass on within the editorial department. Without such a tool, there is a risk that individual connections remain undiscovered or are forgotten. The knowledge graph eases this transfer: it makes knowledge about objects, people, and other entities easily transferable, and not just for people.

Large language models (LLMs) such as ChatGPT or Bard can provide incorrect information or hallucinate. They may handle language and context well, but the underlying information can be misleading, or the facts are simply made up by the LLM. In an environment that thrives on accurate information, where accuracy is a top priority, we need reliability. And that is exactly what an LLM that bases its information on a knowledge graph provides.
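One common way to wire this up is to retrieve verified facts from the graph first and hand them to the LLM as context. The sketch below illustrates that pattern under simplified assumptions; query_knowledge_graph and call_llm are hypothetical placeholders, not any specific product's API.

```python
# A minimal sketch of grounding an LLM in a knowledge graph:
# look up verified facts first, then constrain the model to them.
# Both functions below are illustrative stand-ins.

def query_knowledge_graph(entity: str) -> list[str]:
    # Stand-in for a real graph lookup returning verified statements.
    facts = {
        "Marie Curie": [
            "Marie Curie won the Nobel Prize in Physics in 1903.",
            "Marie Curie won the Nobel Prize in Chemistry in 1911.",
        ]
    }
    return facts.get(entity, [])

def call_llm(prompt: str) -> str:
    # Stand-in for any LLM API call.
    return f"[LLM answer based on: {prompt[:60]}...]"

def grounded_answer(question: str, entity: str) -> str:
    facts = query_knowledge_graph(entity)
    prompt = (
        "Answer using ONLY the verified facts below; "
        "say 'unknown' if they do not cover the question.\n"
        "Facts:\n- " + "\n- ".join(facts) + "\nQuestion: " + question
    )
    return call_llm(prompt)

print(grounded_answer("Which Nobel Prizes did she win?", "Marie Curie"))
```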

Whereas LLM training requires an extremely large amount of text just to build language comprehension at all, a knowledge graph captures the connections between different pieces of information as verified facts, free from trained-in fuzziness or errors.

[Figure: data analysis example with a knowledge graph]

Knowledge Graph

A knowledge graph connects knowledge by representing information in the form of points (entities such as objects or people) and linking them with lines (relationships). The points represent individual elements of knowledge, while the lines show the relationships between them.

But this is just the beginning: processing in a knowledge graph offers further analysis options.

Machine learning makes further correlations visible that are difficult to uncover through human analysis. For example: “Which actresses under 1.60 m have won an Oscar?” Research that would be far more time-consuming without the database, but that the graph answers in just a few seconds.
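As a toy illustration of such an attribute query, the snippet below filters invented sample records by height and award. Against a real graph such as Wikidata, the same question would be expressed as a declarative graph query rather than a Python loop.

```python
# Invented sample data: entity attributes as they might be stored
# on the nodes of a knowledge graph.
people = {
    "Actress A": {"height_m": 1.56, "awards": ["Oscar"]},
    "Actress B": {"height_m": 1.72, "awards": ["Oscar"]},
    "Actress C": {"height_m": 1.58, "awards": []},
}

# The query: actresses under 1.60 m who have won an Oscar.
winners_under_160 = [
    name
    for name, attrs in people.items()
    if attrs["height_m"] < 1.60 and "Oscar" in attrs["awards"]
]
print(winners_under_160)  # -> ['Actress A']
```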

The combination of knowledge graphs and LLMs enables the creation of specialized LLMs that not only master language but can also generate precise information in any subject area: indispensable in future-proof companies and editorial offices.

This reliability is particularly important in light of upcoming regulatory requirements such as the EU’s Artificial Intelligence Act. Transparent and traceable LLMs based on knowledge graphs are better able to comply with the prescribed rules, as their contextual and verified data makes them more trustworthy and fully traceable.

How can such a knowledge graph find its way into journalistic processes? A use case with CGI’s OpenMedia.

CGI OpenMedia meets DeepVA

The CGI OpenMedia Newsroom Computer System (NRCS) acts as a collaborative platform for journalists, providing the information, functionality, and transparency needed to meet the challenges of daily news production. It ensures maximum speed and scalability for journalistic workflows in broadcast news production, from wire search to scripting to playout control.

DeepVA’s integration with OpenMedia enables the seamless use of DeepVA’s intelligent features without the need to manage multiple applications. Journalists can quickly and easily obtain relevant information to support, improve, and accelerate their daily research. DeepVA identifies people in the body text of an OpenMedia story and displays them along with useful personal data such as a short description, date of birth, and, where applicable, date of death. It also suggests similar OpenMedia stories and lists them within the current story with their key details, giving the user direct access to all of them.
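The sketch below illustrates the general idea of that workflow: recognize known people in a story’s body text and attach their profile data. It is not DeepVA’s actual API; both the matching and the records are simplified, invented stand-ins.

```python
# Illustrative stand-in for entity recognition plus a knowledge
# graph lookup: find known people in a story and return profile
# cards. Names and records are invented.
profiles = {
    "Jane Doe": {
        "description": "Invented example politician",
        "born": "1960-01-01",
        "died": None,
    },
}

def annotate_story(body_text: str) -> list[dict]:
    """Return a profile card for every known person named in the text."""
    return [
        {"name": name, **data}
        for name, data in profiles.items()
        if name in body_text
    ]

story = "Jane Doe met with delegates in Potsdam on Tuesday."
for card in annotate_story(story):
    print(card["name"], "-", card["description"], "- born", card["born"])
```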
