The Ministry of Communication and Digital is preparing regulations that would require AI-generated content to be labeled. How effective will the regulations, slated for publication in 2026, be in countering deepfakes?
By Abdullah Fikri Ashri
28 Jan 2026 12:39 WIB · English
In the current era of artificial intelligence (AI), it is not easy to distinguish original content from AI-generated output. The Ministry of Communication and Digital (Kemkomdigi) is preparing regulations that require AI content to carry labels. How effective will this regulation be in countering deepfakes?
The plan to create the regulation emerged during a Working Meeting of Commission I of the DPR with the Minister of Communication and Information, Meutya Hafid, at the Parliament Complex, Senayan, Jakarta, on Monday (26/1/2026). The meeting, which was broadcast online, was attended by the leadership and members of Commission I of the DPR as well as echelon I officials from the Ministry of Communication and Information.
The Director General of Digital Ecosystem at the Ministry of Communication and Information Technology, Edwin Hidayat Abdullah, stated that the ministry is preparing ministerial regulations regarding the use of AI for electronic system organizers (PSE). PSE refers to the managers of electronic systems, such as websites or social media.
“(The ministerial regulation in question) requires that generative AI content produced be watermarked. We’re currently drafting this. So, don’t be confused about whether this is AI-generated content or not,” Edwin told members of Commission I of the Indonesian House of Representatives.
With this regulation, AI service providers are required to label every output produced by generative AI. Generative artificial intelligence refers to the use of AI to create content, such as text, images, audio, and video. Examples include ChatGPT, Google Gemini, and Grok AI.
“When (AI content) appears on electronic system providers, such as YouTube or social media, without an AI label, the content can be taken down. So, this is one of the rules we’ve designed,” Edwin explained.
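As a rough illustration only (this is not any mechanism Komdigi has published), a platform-side check along the lines Edwin describes could flag unlabeled generative-AI uploads for takedown review. The metadata field names here are invented for the sketch:

```python
# Hypothetical sketch of a platform-side moderation check for AI-content
# labels. The metadata schema ("generator", "ai_generated") is assumed
# for illustration; a real system would follow whatever labeling standard
# the regulation ultimately mandates.

def needs_takedown_review(metadata: dict) -> bool:
    """Return True when content declares a generative-AI tool but carries no AI label."""
    declares_ai_tool = metadata.get("generator", "").lower() in {
        "chatgpt", "gemini", "grok",
    }
    has_ai_label = bool(metadata.get("ai_generated"))
    return declares_ai_tool and not has_ai_label

# Example: content produced by a generative tool but uploaded without a label.
upload = {"generator": "Grok", "ai_generated": False}
print(needs_takedown_review(upload))  # True: flag for review
```

The point of the sketch is the division of labor the regulation envisions: the AI provider attaches the label, and the platform only has to verify its presence rather than detect synthetic media itself.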
To date, much AI-generated content has carried no marking at all. Worse, some parties use AI to create deepfakes, manipulating images, audio, and even video. Simply by typing a text command (prompt), anyone can create one.
The technology can manipulate a person's expressions and speech, so that an individual appears to do or say something they never actually did or said. Because the results look highly realistic, many people are deceived into believing them.
Citizens and even presidents can fall victim to deepfakes. Last year, for example, a doctored video circulated of Finance Minister Sri Mulyani saying, “teachers are a burden on the country.” There was even a video of President Prabowo Subianto speaking fluent Arabic and Mandarin.
Deepfake content can also create images of people wearing bikinis or even naked, as seen on Grok, an AI chatbot on the social media platform X (formerly Twitter). The proliferation of non-consensual sexual deepfake content led the Ministry of Communication and Information Technology to temporarily block Grok on January 10.
If the Komdigi ministerial regulation on AI is issued, the spread of deepfake content could be curbed. According to Edwin, the regulation will require any unlabeled AI content to be taken down. Sanctions for AI developers are already stipulated in other regulations.
That regulation is Law Number 1 of 2024 concerning the Second Amendment to Law No. 11/2008 on Electronic Information and Transactions (ITE Law). However, no article in that law specifically mentions artificial intelligence.
Edwin did not disclose when the ministerial regulation on AI content labeling would be issued. It will, however, come after President Prabowo issues two presidential regulations: the Artificial Intelligence (AI) Roadmap and AI Ethics.
“These two drafts of the presidential regulation have been included in the priority presidential regulations that will be signed by the president in 2026,” he stated. Edwin explained that the roadmap for AI includes regulations on the use of AI in 10 sectors, such as food security, transportation, logistics, and finance.
The roadmap will also support the implementation of Prabowo’s priority programs, such as free nutritious meals, free health check-ups, and the red and white cooperatives. This regulation mandates the establishment of a task force that directs and aligns the implementation of its provisions.
The Presidential Decree on AI Ethics will regulate three parties: users, industry players, and regulators or the government. Users, such as netizens, must be careful when using AI. They should avoid sharing personal data, such as the contents of their ID cards, with chatbots.
Industry players or AI technology developers are also required to protect citizens using AI to prevent data breaches. Each ministry and agency must regulate the utilization of AI in their respective sectors.
The AI ethics framework, it is hoped, can mitigate three major risks of AI use in Indonesia. First, the risk of widening social disparities: schools with complete digital infrastructure, for example, can use AI far more effectively than schools with minimal facilities.
The second risk is violation of user privacy. Third is the risk of deepfakes being used for crime, which has been rising recently. To address these risks, the AI ethics regulation will govern users, industry, and institutions.
The Minister of Communication and Information, Meutya Hafid, stated that the draft White Paper on the AI Roadmap and Ethics was prepared in 2025 and is targeted to become a presidential regulation this year. The Government Regulation (PP) derived from Law Number 27 of 2022 concerning Personal Data Protection is also expected to be completed by early 2026.
“While waiting for the (PP and Perpres) to be signed, we have already prepared or are currently discussing a draft regulation. So, once it’s signed, the first regulation to be issued will require platforms to label or watermark AI content,” said Meutya.
While awaiting the issuance of the AI regulations, the ministry is also tightening supervision of PSE to strengthen digital governance. As of December 2025, 3,805 PSE were registered with the ministry, and 61 warning letters had been issued ordering unregistered PSE to comply.
“Of the 61 warning letters, most (of the recipients) have finally registered, including companies like OpenAI (ChatGPT),” she said.
Meutya said the blocking sanction against Grok remains in effect. The ministry is also awaiting confirmation of Grok’s compliance.
Mulyadi, a member of Commission I of the House of Representatives (DPR), stated that people are often confused about how to distinguish between AI-generated fake and genuine content. If this isn’t addressed, cybercrimes such as fraud, deepfakes, and online sexual crimes could occur and harm society.
“The development of AI is progressing faster than the regulations being prepared. This is what we are concerned about. If the ministry only provides regulations for a certain level, it (AI) has already created innovations that surpass that. (The development of AI) is an issue that cannot be hindered,” he stated.
Moreover, deepfake content is becoming increasingly prevalent. The Empowering Indonesia Report 2025, themed “Building Bridges of Tomorrow,” noted that deepfake content jumped 1,550 percent between 2023 and 2024. The report was launched in late October 2025 by Indosat Ooredoo Hutchison (Indosat/IOH) and Twimbit, a research and consulting firm.
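To put the reported figure in perspective, a 1,550 percent jump means the 2024 volume is 16.5 times the 2023 baseline, as this small arithmetic check shows (the baseline index value is arbitrary):

```python
# A 1,550 percent increase: new = old * (1 + 1550/100).
baseline_2023 = 100  # arbitrary index value for the 2023 volume
volume_2024 = baseline_2023 * (1 + 1550 / 100)
print(volume_2024 / baseline_2023)  # 16.5, i.e. 16.5x the 2023 level
```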
Meanwhile, Indonesia became the world's third-largest base of users of ChatGPT, an AI service built on a large language model (LLM), after China and India in 2024. Some 129 million people, or 45 percent of the population, reportedly use ChatGPT actively every week.
The Chairman of the Presidium of the Indonesian Anti-Slander Society (Mafindo), Septiaji Eko Nugroho, urged the government to promptly establish regulations regarding AI, even in the form of legislation. South Korea, for instance, has a law that stipulates fines for AI-generated content that is not labeled.
The law, fully titled The Basic Act on the Development of Artificial Intelligence and the Establishment of a Foundation for Trustworthiness, also encourages research, talent training, and startup support. The European Union also has an AI law that will come into full effect in 2027.
Indonesia, he said, can learn from countries that already have AI regulations. According to Septiaji, the rules related to AI should at least encompass four major points. First, content transparency by requiring platforms to label content generated by AI.
“Thus, the burden of distinguishing between synthetic and authentic content no longer falls on the community, but on social media platforms and AI content creators,” he stated. Second, AI regulations must be risk-based. To anticipate the risk of data breaches, for instance, platforms would be required to report on the data they protect.
Third, AI regulations must strike a balance between innovation and security. For example, startups can test their technology in limited environments without being fully bound by burdensome regulations. Regulations must also differentiate between large companies, medium-sized companies, and academics.
Fourth, regulations must support local journalism and content creators. AI platforms that use journalistic data and works by Indonesian creators, for instance, should be required to provide support and compensation. In this way, journalism and the community’s authentic content are preserved.
According to Septiaji, regulations can prevent the spread of deepfakes and provide legal certainty to AI developers and investors. However, regulations will only be effective if they are implemented. “Furthermore, AI ethics education is also urgent,” he said.