#14 Asia AI Policy Monitor
Thanks for reading this month’s newsletter, along with over 1,600 other AI policy professionals across multiple platforms, to keep up with the latest regulations affecting the AI industry in the Asia-Pacific region.
*** Recently, we provided insight on labor and energy shortages for AI in Japan for an AFP article. This follows a previous media mention of our publication in The Economist, where we highlighted the danger of AI anthropomorphization concealing AI harms. ***
Do not hesitate to contact our editors at seth@apacgates.com if we missed any news on Asia’s AI policy!
--- Some block quotations are truncated on LinkedIn, so please read us on Substack for the full newsletter: #14 Asia AI Policy Monitor. ---
Privacy
The Office of the Australian Information Commissioner (OAIC) published guidance on the use of biometric data and facial recognition technology (FRT).
As part of this privacy by design approach, it is expected that key principles will be explored to support the appropriate use of sensitive information when using FRT, including:
The OAIC also found a company in violation of Australia’s privacy rules for its collection and use of facial recognition technology.
“We acknowledge the potential for facial recognition technology to help protect against serious issues, such as crime and violent behaviour. However, any possible benefits need to be weighed against the impact on privacy rights, as well as our collective values as a society.”
Intellectual Property
Publishers in India sued OpenAI for copyright infringement based on the use of their content to train its large language models. This marks the first litigation on AI training and copyright infringement in Asia, while similar strategic litigation proceeds in the US and Europe. Our editors at Digital Governance Asia recently wrote on the issue in Asia for Tech Policy Press.
[Claimants] also alleged that OpenAI stored the data, which is in breach of intellectual property rights, while using its content and data to train their Large Language Model. The [claimants] are seeking initial damages of ₹20 million ($236,910), its lawyer Sidhant Kumar said.
Finance
Bank of Japan Governor Kazuo Ueda noted the financial risks of AI in recent statements.
“As financial services grow more diverse and complex, the channels of risk transmission have become less transparent, and current financial regulations may not be fully equipped to manage new types of financial services,” Ueda said.
Environment
The Hinrich Foundation published a paper on the growing power and space constraints that increasing investment in internet connectivity and data centers to power AI places on the environment. The paper emphasizes the impact on Southeast Asia in particular.
In Southeast Asia, the rollout of new internet infrastructure has double-edged impacts. Massive investments in renewable and clean energy to power data centers are coming online and can be expected to help underwrite a strengthening of the electricity grid, thereby creating spillover benefits. Geopolitical factors weighing on the locations of subsea cables could benefit the region and increase the importance of hubs like Singapore. They could also turn just as quickly into a geopolitical minefield.
Trust, Safety and Community
South Korea’s Communications Commission (KCC) investigated Telegram over the distribution of deepfake pornography.
Earlier, on November 7, the KCC noted that most deepfake sexual crime materials have recently been distributed through Telegram, and requested that Telegram designate a youth protection manager and reply with the results in order to induce Telegram to strengthen its self-regulation. Accordingly, Telegram designated a youth protection manager and notified them within two days, replied with a hotline email address for administrative communication, and responded to an email sent to confirm that the hotline email address was functioning normally with a response within four hours that they would “cooperate closely,” the KCC said.
Cybersecurity and Military
A UK minister announced at a NATO conference the launch of an AI lab to fight cyber threats, in particular those from China, Russia, North Korea, and Iran.
Last year, we saw the United States, for the first time, publicly call out a state for using AI to aid its malicious cyber activity.
Microsoft Threat Intelligence released a report on North Korea’s use of AI to support illicit activities, including using AI to build a repository of fake resumes and LinkedIn accounts to gain access to organizations as IT workers:
[The report details how North] Korea (DPRK) has successfully built computer network exploitation capability over the past 10 years and how threat actors have enabled North Korea to steal billions of dollars in cryptocurrency as well as target organizations associated with satellites and weapons systems. Over this period, North Korean threat actors have developed and used multiple zero-day exploits and have become experts in cryptocurrency, blockchain, and AI technology.
Multilateral
China, Microsoft, and the UN’s International Labour Organization (ILO) collaborated on how to use AI to foster vocational training.
On 5 November 2024, the ILO project Quality Apprenticeship and Lifelong Learning in China Phase 2 successfully organized the AI for Better TVET webinar in partnership with Microsoft, kicking off the AI-VIBES Series (AI for Vocational Instructors Boosting Education and Skills) which builds the capacity of TVET teachers and in-company trainers.
Singapore and the EU signed an Administrative Arrangement on AI safety. The arrangement covers 6 issue areas:
a. Information Exchange: Sharing of expertise, information, and best practices on aspects of AI safety, including relevant technologies, governance frameworks, technical tools and evaluations, and research and development, for the purpose of advancing AI safety. These may include expert information exchange with related partners, such as institutes of higher learning, think-tanks, enterprises, government agencies, or other ecosystem stakeholders.
The G20 (members from Asia: China, India, Indonesia, Japan, South Korea, and Australia) included provisions on AI in its Leadership Statement:
We recognize the role of the United Nations, alongside other existing fora, in promoting international AI cooperation, including to empower sustainable development. Acknowledging growing digital divides within and between countries, we call for the promotion of inclusive international cooperation and capacity building for developing countries in this domain and welcome international initiatives to support these efforts. We reaffirm the G20 AI principles and the UNESCO Recommendation on Ethics of AI…
China announced a plan to support the Global South in AI on the sidelines of the G20.
China, along with Brazil, South Africa, and the African Union, was launching an "Open Science International Cooperation Initiative" designed to funnel scientific and technological innovations to the Global South.
AI Safety Institutes (AISIs) from Australia, Canada, the European Union, France, Japan, Kenya, South Korea, Singapore, the United Kingdom, and the United States met in San Francisco this month for the first meeting of the International Network of AISIs. The meeting resulted in four deliverables:
The International Network Members have also aligned on four priority areas of collaboration: pursuing AI safety research, developing best practices for model testing and evaluation, facilitating common approaches such as interpreting tests of advanced AI systems, and advancing global inclusion and information sharing.
China’s absence from the International Network of AISIs meeting raises important questions about whether the country has a central stakeholder dealing with AI safety. This report from the Institute for AI Policy and Strategy identifies several organizations in China that may play such a role.
The OECD (members from Asia include Australia, Japan, South Korea, and New Zealand) published a paper on assessing potential future AI risks, benefits, and policy priorities.
The Expert Group… put forth ten policy priorities to help achieve desirable AI futures:
The WTO published a report on AI and trade, which includes case studies from Singapore:
For example, Singapore has leveraged AI in order to continue to act as a global hub facilitating trade and connectivity. Singapore’s Changi Airport, which handled more than 59 million travellers last year, uses AI to screen and sort baggage, and to power facial recognition technology for seamless immigration clearance. The Port of Singapore, which handled cargo capacity of 39 million twenty-foot equivalent units (TEUs) in 2023, uses AI to direct vessel traffic, map anchorage patterns, coordinate just-in-time cargo delivery, process registry documents, and more.
In the News & Analysis
Stanford’s Institute for Human-Centered AI (HAI) published the AI Vibrancy Index. Half of the top 10 countries are in Asia, in order: China (2), India (4), South Korea (7), Japan (9), and Singapore (10).
The Australian Strategic Policy Institute (ASPI) published an article urging the government to regulate AI to contain bioterrorism risks.
Both generative AI, such as chatbots, and narrow AI designed for the pharmaceutical industry are on track to make it possible for many more people to develop pathogens. In one study, researchers used in reverse a pharmaceutical AI system that had been designed to find new treatments. They instead asked it to find new pathogens. It invented 40,000 potentially lethal molecules in six hours. The lead author remarked how easy this had been, suggesting someone with basic skills and access to public data could replicate the study in a weekend.
Singapore Minister for Digital Development and Information Josephine Teo explained the country’s interest in fostering practical AI tools at a recent industry conference:
There are many countries that would like to gain leadership in AI, for example, by making sure that they are involved in the development of the most cutting-edge, frontier models. We adopt a different approach. The approach that we want is, to put emphasis on enterprise use. We want to see many use cases being experimented upon, and it is this emphasis on having real activities, with companies and organisations benefiting from the use of AI tools
Advocacy
South Korea is conducting a survey regarding regulatory sandbox methods for data-intensive industries until 13 December.
Japan’s Fair Trade Commission opened a public comment period until 22 November on Generative AI Market Dynamics and Competition:
Given the rapidly evolving and expanding generative AI sector, the JFTC has decided to publish this discussion paper to address potential issues and solicit information and opinions from a broad audience. The topics outlined in this paper aim to contribute to future discussions without presenting any predetermined conclusions or indicating that specific problems currently exist. The JFTC seeks insights from various stakeholders, including businesses involved in different layers of generative AI markets (infrastructure, model, and application layers as described in Section 2), industry organizations, and individuals with knowledge in the generative AI field.
Sri Lanka’s National AI Strategy is open for consultation until 6 January 2025.
The Asia AI Policy Monitor is the monthly newsletter for Digital Governance Asia, a non-profit organization with staff in Taipei and Seattle. If you are interested in contributing news, analysis, or participating in advocacy to promote Asia’s rights-promoting innovation in AI, please reach out to our secretariat staff at APAC GATES or Seth Hays at seth@apacgates.com.
Asia AI Policy Monitor™ is free. Let us know if you want to contribute or support the network. To submit articles or join our network of policymakers and analysts, please email our editor at seth@apacgates.com. To support financially, click below.