The True AI Revolution is Ahead, and Regulations Could Stifle Innovation: Insights from Meta's AI Leader

Seoul, Dec 11 (NationPress) Yann LeCun, chief artificial intelligence (AI) scientist at Meta Platforms, said the 'true AI revolution' is still ahead and urged governments not to adopt regulations that could obstruct technological progress.
'The true AI revolution has not yet materialized,' LeCun remarked during his keynote address at the 2024 K-Science and Technology Global Forum in Seoul, organized by South Korea's science ministry, according to reports from the Yonhap news agency.
'In the near future, every interaction we have with the digital realm will be facilitated by AI assistants ... and ultimately, we require systems that possess a level of intelligence akin to humans,' he pointed out.
A pioneer of modern AI, LeCun noted that generative AI systems built on large language models (LLMs), such as OpenAI's ChatGPT and Meta's Llama, fall short of human abilities in understanding the physical world, reasoning, and planning.
LeCun explained that LLMs handle language well because it is discrete and relatively simple, but they struggle with the far greater complexity of the real world.
To overcome these limitations, Meta is developing objective-driven AI based on a new architecture designed to learn about the physical world through observation, much as infants do, and to make predictions from that understanding.
He also emphasized the significance of an open-source AI ecosystem to develop AI models that grasp various languages, cultural contexts, and value systems worldwide.
'We cannot rely on a single organization located on the west coast of the United States to train these models,' he stated, advocating for a collaborative approach to AI training on a global scale.
LeCun cautioned that 'regulation could kill open source,' urging governments not to rush into laws that could impede technological progress. 'There is no evidence that any AI system is inherently hazardous,' he concluded.