2022-2024 Media Monitoring: Documenting the Impacts of Artificial Intelligence (AI) in Indonesia

Read this post in Bahasa Indonesia

Media monitoring is one of EngageMedia’s initiatives to collect reports of Artificial Intelligence (AI) incidents and understand how AI impacts our society. This effort includes monitoring recurring issues arising from AI use in Indonesia, trends in AI incidents, the technologies involved, and possible impacts on rights violations. Data was collected from various sources, including but not limited to social media (X, Facebook, Reddit, TikTok, Instagram), credible local and national news publications (e.g. Tempo and Kompas), and third parties that directly reported AI incidents.

The focus of this report and analysis includes:

  1. Incidents caused by the development, deployment, and usage of AI, either in partial or total capacity;
  2. Incidents that are of public interest;
  3. Incidents that impacted individuals/small groups/small and medium enterprises (SMEs);
  4. Incidents that caused material loss to the victims and/or their community in the following aspects: physical and bodily autonomy; mental and psychological; economy; human rights and civil liberties; and cultural and moral degradation.

This media monitoring compiles various AI incidents related to the public interest throughout 2022-2025. As the 2025 monitoring is still ongoing, this brief focuses on presenting results from 2022-2024.

Overall, a total of 29 incidents throughout 2022-2024 are suspected to have resulted from the deployment of AI. From 2022 to 2023, before the government of Indonesia had issued any circular letter on AI, six incidents were recorded. After the Circular Letter of the Minister of Communication and Information Technology Number 9 of 2023 on Artificial Intelligence Ethics came into effect, 23 incidents were recorded in 2024.

The monitoring team categorized the 29 incidents into six types based on their potential impact on human rights: autonomy & reputation impact (11 cases), physical impact, psychological impact (3 cases), economic and business impact (10 cases), human rights & civil liberties impact (9 cases), and social & cultural impact (9 cases). As a single incident may fall under more than one category, the counts exceed the total of 29 incidents.

In the autonomy & reputation impact category, the monitoring team found a case in June 2024 where an anonymous Telegram channel named Rahasia Mantan (Ex’s Secrets) advertised a service producing nude/pornographic photos and videos of girls under 18 and of women without their consent. This was done by feeding images into a generative AI deepfake tool and instructing it to alter them. Participants gained access to the service via a Google Form link that was widely circulated on the social media platform X.

In September 2024, a case in the economic and business impact category, related to intellectual property, involved prominent Indonesian authors such as Pramoedya Ananta Toer, Intan Paramaditha, and Eka Kurniawan. Their works had been included without permission in a library that prominent Big Tech companies, such as Meta, used to train their AI systems.

Cases in the human rights & civil liberties impact category occurred several times in 2022 and 2023. In one instance, the electronic traffic law enforcement system (E-TLE) ticketed a person for an offence committed by someone else who had forged his license plate. After receiving the ticket from the police, the person clarified that the license plate was registered to his car, which differed in colour and specification from the car caught on camera using the fake plate, and that the driver recorded in the system was not him.

The monitoring team also grouped these AI incidents into 11 clusters by incident type. The largest clusters were accuracy/reliability (9 cases), conduct/negligence resulting in crime (7 cases), authenticity/integrity (7 cases), safety and security (2 cases), business/work (2 cases), and accessibility (1 case).

In February 2023 and December 2024, a ride-hailing company was implicated in the transparency and accountability cluster. In one case, a driver and a customer quarrelled over price differences between their respective applications. To avoid further conflict, the customer paid more than the price stated in their application and reported the incident to the company; it is not known whether the case has been resolved. In another case, drivers reported that getting customers had become increasingly difficult under the gamified system, in which drivers must reach a certain level to be assigned a certain number of customers. This system compelled drivers to work longer hours, totalling more than 12 hours a day.

Aside from incident types, the monitoring team also categorised the incidents by novelty, stage in the AI lifecycle, and type of technology. A total of 25 cases are novel, 2 are ongoing, and another 2 are recurring. In terms of the AI lifecycle, the majority (19 cases) occurred during deployment, 5 cases occurred at the model output stage, and the remaining 4 cases occurred at the data collection stage. In terms of technology, the most used was generative AI (15 cases), followed by natural language processing (NLP) or text analysis (6 cases) and identity recognition (5 cases).

The grouping of AI incidents also covers the actors involved as developers, deployers, and users: big tech companies (9 cases), law enforcement officials (2 cases), ride-hailing companies (2 cases), and long-distance ground transportation companies (2 cases).

Finally, the monitoring team linked AI incidents to relevant existing regulations and policies. The top five regulations include: Article 28D paragraph (1) of the Constitution on Human Rights, Law No. 39/1999 on Human Rights, Law No. 27/2022 on Personal Data Protection, Law No. 1/2024 on ITE, and Law No. 1/2023 on Criminal Law (New Criminal Code).

The following existing regulations and policies are relevant to analysing AI incidents in 2022-2024:

  1. Article 28D paragraph (1) of the 1945 Constitution of the Republic of Indonesia (UUD 1945)
  2. Article 9, 17, 38 of Law No. 39/1999 on Human Rights (HAM)
  3. Article 243, 263, 264, 282, 495 of Law No. 1/2023 on Criminal Law (New Criminal Code/KUHP)
  4. Article 4, 7, 9, 10 of Law No. 44/2008 on Pornography
  5. Article 27, 28, 45 of Law No. 1/2024 on Information and Electronic Transactions (ITE)
  6. Article 4, 5, 14 of Law No. 12/2022 on Crime of Sexual Violence (TPKS)
  7. Article 10, 16, 65, 67 of Law No. 27/2022 on Personal Data Protection (PDP)
  8. Article 4 (3) & (8), 17 of Law No. 8/1999 on Consumer Protection
  9. Article 16 of Law No. 40/2008 on Elimination of Racial and Ethnic Discrimination
  10. Article 9, 12 of Law No. 28/2014 on Copyright
  11. Article 9 (6) of the Minister of Research, Technology, and Higher Education Regulation No. 22/2022 on Book Quality Standards, Process Standards and Rules for Obtaining Manuscripts, and Process Standards and Rules for Publishing Books

Aside from the relevant regulations above, the monitoring team identified policy gaps that need to be addressed to anticipate future issues in AI use. In one 2024 case, a renowned university in Indonesia suspected several applicants of using AI assistance in its entrance exam. Cheating in an educational context is not yet concretely regulated at the national level, so resolution depends on the internal rules of each academic institution.

The media monitoring report can be viewed in detail below.

This text was co-written by Marina Nasution, along with Siti Desyana and Debby Kristin, based on data collected by EngageMedia, and features graphic illustrations by Amry Al Mursalaat. This text was translated into English by Meivy Andriani.