Denmark is expected to save the equivalent of 30,000 full-time positions in the public sector with the help of artificial intelligence. But the drive for efficiency is moving slowly and risks coming at the expense of citizens’ data security. Across municipalities and government agencies, AI chatbots are already being used extensively, often without clear guidelines or solid knowledge of GDPR. The result is a digital race in which ambitions are high, but the protection of sensitive information is lagging behind.
A heavy burden rests on the public sector in Denmark. And no, this is not about waiting lists for psychiatric treatment or a lack of nursery staff. It is about artificial intelligence, and it is about the fact that, by 2035, civil servants and employees in the state, regions and municipalities must work out how the equivalent of 30,000 full-time positions can be replaced by artificial intelligence.
That is quite a burden. Especially because things are not moving as fast as many had hoped. The latest figures from the municipalities show that after three years of generative AI, a total of 34.35 full-time equivalents have been saved, and 29 of those come from Næstved Municipality.
In other words, public employees need to get their act together if digital Denmark is to stay on the AI bandwagon. And that puts considerable pressure on each individual employee to get started with artificial intelligence – whatever it takes.
Enormous Potential for Efficiency – but…
There is indeed potential to save time with AI in the public sector.
The catch is that these chatbots only become truly useful when they are fed generous quantities of data about the citizens the public sector serves. And that obviously creates problems.
What Happens to our Data?
First, it is unclear where this data ends up. If you use a free version of an AI chatbot, you can be fairly certain that your data will be used to train future models, which means that personal and other sensitive public data may end up becoming publicly accessible.
Second, in the vast majority of cases you can be sure that the data ends up on servers located outside the EU, which means that you are breaching GDPR the very moment you input a national ID number or any other sensitive personal data into an AI chatbot.
Real Savings Require Real Data
When I teach responsible use of AI, I always preach that you should not feed artificial intelligences with any data other than that which is already freely available on the company’s or organisation’s public website. The problem is that you do not save the equivalent of 30,000 full-time positions if you are only allowed to use information from the website to feed the hungry AIs. The real savings only come if you feed them with real data.
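One practical safeguard that organisations sometimes put between employees and a chatbot is an automatic screen that masks anything resembling sensitive identifiers before a prompt leaves the building. The sketch below is purely illustrative and not a method described in this article: it assumes the Danish CPR format (DDMMYY-SSSS, with or without the hyphen), and the function name and placeholder text are invented for the example. A regex screen like this reduces, but does not eliminate, the risk of leaking personal data.

```python
import re

# Danish CPR numbers are commonly written DDMMYY-SSSS, sometimes as
# ten digits without the hyphen. This pattern is illustrative, not
# an exhaustive or validated CPR check.
CPR_PATTERN = re.compile(r"\b(\d{2})(\d{2})(\d{2})-?(\d{4})\b")

def redact_cpr(text: str) -> str:
    """Replace anything that looks like a CPR number with a placeholder."""
    def _mask(match: re.Match) -> str:
        day, month = int(match.group(1)), int(match.group(2))
        # Only mask strings whose leading digits form a plausible date,
        # to reduce false positives on ordinary ten-digit numbers.
        if 1 <= day <= 31 and 1 <= month <= 12:
            return "[CPR REDACTED]"
        return match.group(0)
    return CPR_PATTERN.sub(_mask, text)

prompt = "Citizen 010190-1234 has applied for housing benefit."
print(redact_cpr(prompt))
# -> Citizen [CPR REDACTED] has applied for housing benefit.
```

Even a filter like this only catches one well-known identifier format; free-text details about a citizen's health or case history pass straight through, which is why training and guidelines remain the primary defence.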
Massive GDPR Breaches Every Day
And that is precisely what is happening across the public sector in Denmark. There are no comprehensive statistical studies of this yet, but after travelling around the country for two years, teaching and giving talks, it is my clear impression that: 1) AI chatbots are being used extensively; 2) the general level of knowledge about data risks is very low; and 3) there is a glaring lack of rules, guidelines, training and access to secure AI systems. That combination likely means there are hundreds – perhaps thousands – of GDPR breaches every single day, because public sector employees are doing what they have been asked to do: trying to become more efficient with generative AI.
My suspicion is supported by the fact that the Danish Data Protection Agency has also observed “an increase in the number of personal data security breaches where employees have used personal data from their work as input and prompts in the tools and solutions”. It is also backed up by digitalisation consultant Jesper Rukshan, who told the engineering journal Ingeniøren that “there are public employees who have used ChatGPT for citizen cases despite a ban on sharing sensitive personal information”.
Who is Responsible?
In principle, the companies behind AI products ought to take responsibility for the fact that their solutions so strongly incentivise people to break the law. But that is not how things work in the world of tech giants. I recently met OpenAI’s Chief Strategy Officer Jason Kwon and asked him whether OpenAI had any responsibility for preventing sensitive data from the Danish public sector from likely being processed on a large scale in their systems. His answer was that it was technically difficult to prevent and therefore not their responsibility. If we wanted to stop that kind of illegality, he said, we would have to focus on training employees.
The tech giants are far too busy fighting for world domination to worry about Danish public sector employees breaching a few GDPR rules here and there. So we probably have to accept that the responsibility rests on our own shoulders – that is, on politicians and civil servants in the public sector.
The Solutions
And Jason Kwon is right that we need more training for employees. Far too many courses and training programmes in AI focus on prompting and tips and tricks. We need a much stronger emphasis on the ethical, legal and technical limitations of AI.
But we also need much better and more comprehensible guidelines for employees. Many organisations already have such guidelines, but it is clear that they are not being followed – perhaps because employees have not been involved in developing them.
Many companies and public bodies have started building their own AI solutions. That is certainly a good approach, but it is expensive and not always as good as the solutions from the tech giants. And it rarely reaches small businesses and small municipalities. We need more shared and secure AI solutions in both the private and public sectors.
Last but not least, we are starting to see Danish and European AI companies offering GDPR-compliant AI chatbots (e.g. syv.ai and haime.ai) where security is built into the systems from the outset. The public sector in particular should orient itself much more towards these kinds of solutions rather than blindly buying from Microsoft or OpenAI.
Who Cares?
Some might argue that all this hand-wringing about data is a storm in a teacup. GDPR is just boring EU bureaucracy that gets in the way of innovation and efficiency in the Danish public sector. But think about how you would feel if your private conversation with your psychologist, your GP or your caseworker were circulating freely on servers around the world, where it could in principle be sold to and used by the highest bidder.
We are already used to massive surveillance from social media, but if our public sector data also becomes part of the grinding gears of surveillance capitalism, there will be very few places left where we can truly be private.
[Article translated from Danish with help from AI. No private data was uploaded in the process]
You can read the article in Danish here.
Photo: RENXIN PAN, Unsplash.com
DataEthics is a politically independent ThinkDoTank based in Denmark with a European (and global) outreach.
© Dataetisk Tænkehandletank 2025. All rights reserved.