New debate about the reliability of AI chatbots in Germany – igor´sLAB

The current debate on the use of AI chatbots as a replacement for traditional search engines has reached a new level of intensity in Germany. This is based on recent surveys, several scientific studies and clear warnings from IT experts. The main concern is that many users accept answers without checking them, even though systems such as ChatGPT, Claude or Perplexity are not designed to be reliable research tools.
A Bitkom survey of more than 1,000 people shows that a significant share of the population now uses AI chatbots at least occasionally as a source of information. Among 16- to 29-year-olds in particular, usage has reached a level at which chatbots are partially replacing traditional search engines. At the same time, 42% of respondents said they had already received incorrect answers, yet only just over half subsequently verify whether the content provided is correct. This behavior is fueling growing uncertainty about the overall reliability of digital information sources.

The debate gained sharper focus through a study by the European Broadcasting Union published in October 2025, which evaluated around 700 queries for each of several widely used AI services. The proportion of incorrect answers was similarly high across all systems tested: a substantial share of the results contained serious factual errors, and a further share was classified as inaccurate. The study also pointed to structural causes rooted in how large language models work, including the fact that the models draw not only on traditional sources but increasingly on AI-generated content or content of highly variable quality. Because many texts on the internet are reproduced without careful checking, this creates a feedback loop in which errors that are already widespread are amplified further.
Additional risks arise because chatbots often present content in a tone that suggests objective certainty, even though the models merely reproduce patterns whose accuracy cannot be guaranteed. Personal opinions in source texts can likewise be rendered as factual statements. Studies such as one from Princeton University also indicate that AI-generated content is already appearing in new entries on platforms such as the English-language Wikipedia, which complicates the situation further.
Against this backdrop, a growing number of voices from the scientific community are urging caution. Computer science professor Katharina Zweig stressed in an interview that chatbots should not be used as research tools in the sense of a search engine; their use is appropriate only where errors have no serious consequences. This assessment follows from how the models fundamentally work: they process patterns and probabilities without being able to verify the truth of individual statements.
The discussion shows how much many people's search behavior has changed in a short time and what challenges this brings. AI systems offer new forms of access to information, but at the same time the risk of uncritical use is growing. The coming years will show to what extent safeguards, quality controls, and better user understanding are needed to benefit from this technology without further undermining the quality of information.
Current developments underline the need for a deliberate approach to AI-based information services. The trend toward using them as a replacement for search engines carries risks, because errors often go undetected and can be amplified across digital information spaces. Scientific studies and expert advice make clear that the reliability of the answers is limited and that careful verification remains essential.
Source: European Broadcasting Union (EBU), Princeton University
