How Did South Korea Become the First Nation to Implement Comprehensive AI Safety Laws?
Key Takeaways
- South Korea is the first nation to implement comprehensive AI safety laws.
- The AI Basic Act regulates misinformation and deepfake content.
- Companies are held accountable for AI-generated content.
- Providers of high-risk AI services must notify users that the service is AI-based.
- Fines for violations can reach 30 million won.
Seoul, Jan 22 (NationPress) South Korea has officially enacted a comprehensive law aimed at ensuring the safe usage of artificial intelligence (AI) models, becoming the first nation in the world to accomplish this feat. This law establishes a regulatory framework to combat misinformation and other potential dangers associated with the rapidly evolving sector.
The Basic Act on the Development of Artificial Intelligence and the Establishment of a Foundation for Trustworthiness, commonly referred to as the AI Basic Act, came into force on Thursday, as reported by the science ministry and the Yonhap news agency.
This landmark legislation marks the first time a government anywhere has adopted comprehensive rules governing AI use.
Central to the act is a requirement that companies and AI developers take greater responsibility for deepfake content and misinformation produced by AI models; the government can impose fines or open investigations into violations.
Specifically, the act introduces the concept of “high-risk AI,” covering AI models whose output can significantly affect users’ everyday lives or safety, in areas such as employment, loan evaluations, and medical advice.
Entities using such high-risk AI models must notify users that their services are AI-based and must have safety measures in place. In addition, AI-generated content must carry watermarks indicating that it was produced by AI.
“Implementing watermarks on AI-generated content is a fundamental safeguard to mitigate negative consequences from the misuse of AI technology, such as deepfake content,” stated a ministry representative.
International companies offering AI services in South Korea that meet specific criteria—annual global revenue of 1 trillion won (approximately $681 million), domestic sales of 10 billion won or more, or a minimum of 1 million daily users—are mandated to appoint a local representative.
Currently, OpenAI and Google fit these criteria.
Violations of the act can draw fines of up to 30 million won, and the government plans a one-year grace period on penalties to give the private sector time to adapt to the new regulations.
The act also includes provisions for the government to promote the AI industry, requiring the science minister to present a policy strategy every three years.