Google DeepMind has released a comprehensive plan to ensure the safe development of Artificial General Intelligence (AGI). The company defines AGI as AI systems that match or exceed human capabilities in most cognitive tasks, potentially revolutionizing healthcare, education, science and other sectors.
One of the primary concerns is preventing the misuse of AGI, for example to spread disinformation or manipulate public discourse. To address this, Google DeepMind has introduced a cybersecurity evaluation framework to identify and limit dangerous capabilities. The company is also focused on keeping AGI systems aligned with human values and intentions, avoiding scenarios in which an AI pursues goals its operators never intended.
To achieve this, Google DeepMind is developing a multi-layered strategy under which AI systems recognize their own uncertainty, block questionable actions, and escalate decisions to human oversight when necessary. Concrete measures include limiting certain AI capabilities to prevent misuse, training AI systems to refuse harmful requests, restricting advanced capabilities to trusted users and use cases, and testing how effective these safety measures are in practice.
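To make the idea of a layered decision flow concrete, the sketch below shows one way such checks could be combined in code. It is a minimal illustration only: the function names, thresholds, and scores are hypothetical assumptions for this example, not part of DeepMind's published framework.

```python
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    ALLOW = "allow"
    BLOCK = "block"
    ESCALATE = "escalate"  # defer to human review


@dataclass
class ModelResponse:
    text: str
    uncertainty: float  # hypothetical self-reported uncertainty, 0.0-1.0
    harm_score: float   # hypothetical score from a separate safety classifier


def layered_review(response: ModelResponse,
                   user_is_trusted: bool,
                   harm_threshold: float = 0.8,
                   uncertainty_threshold: float = 0.6) -> Decision:
    """Illustrative multi-layered check: block clearly harmful output,
    escalate uncertain cases, restrict borderline ones to trusted users."""
    # Layer 1: block output the safety classifier flags as clearly harmful.
    if response.harm_score >= harm_threshold:
        return Decision.BLOCK

    # Layer 2: restrict borderline capabilities to trusted users and use cases.
    if response.harm_score >= 0.5 and not user_is_trusted:
        return Decision.BLOCK

    # Layer 3: escalate when the model itself is uncertain about the request.
    if response.uncertainty >= uncertainty_threshold:
        return Decision.ESCALATE

    return Decision.ALLOW


# Example: a borderline answer from an untrusted user is blocked.
print(layered_review(ModelResponse("...", uncertainty=0.3, harm_score=0.55),
                     user_is_trusted=False))
```

In a real deployment each layer would be backed by dedicated evaluations rather than fixed thresholds, but the ordering shown here (block, restrict, escalate, allow) mirrors the kind of defense-in-depth the plan describes.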
The development of AGI is expected to have a significant impact on various industries, including search and SEO, content creation, advertising and personalization. Improved safety measures may change how search engines rank results, prioritizing quality content that aligns with human values. Advanced AI content generators may produce higher-quality output with built-in safety rules, and stricter safety checks may limit persuasion techniques in ad targeting and personalization.
Google DeepMind's CEO, Demis Hassabis, estimates that early AGI systems could emerge within five to ten years, with 2030 as a possible date for "powerful AI systems" to appear. However, this estimate carries significant uncertainty, and the company is working to ensure that AGI is developed in a responsible and safe manner.