AI is a once-in-a-generation technology shift, potentially on par with electricity as a general-purpose tool that will transform lives across the globe. The UK Government recognises that AI will “fundamentally alter the way we live, work and relate to one another … [promising] to further transform nearly every aspect of our economy and society, bringing with it huge opportunities but also risks that could threaten global stability and undermine our values.”
AI has developed rapidly due to the expansion of available training data, significant improvements in neural networks and a hundred-million-fold growth in computational power over the past ten years. AI is also becoming more efficient at using data and compute; as these inputs expand, AI systems become more powerful.
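To put that compute figure in perspective, a back-of-the-envelope calculation is sketched below. It simply takes the hundred-million-fold figure quoted above at face value and works out the implied doubling rate; the numbers are illustrative, not a precise measurement of any particular system.

```python
# Back-of-the-envelope: what does a hundred-million-fold growth in
# computational power over ten years imply? Taking the quoted figure
# at face value, compute doubled roughly every four and a half months.
import math

growth_factor = 1e8        # hundred-million-fold increase (as quoted above)
period_months = 10 * 12    # over ten years

doublings = math.log2(growth_factor)       # ~26.6 doublings in total
doubling_time = period_months / doublings  # ~4.5 months per doubling

print(f"{doublings:.1f} doublings, one every {doubling_time:.1f} months")
```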
AI now matches or surpasses human proficiency in many tasks, from near-perfect face and object recognition to real-time language translation. Advanced AI can produce original images, compose fluent text, write code and even predict protein structures. Even in areas once deemed uniquely human, such as strategy and creativity, there have been notable advances.
AI still has limitations. Generative AI confidently produces falsehoods, known as ‘hallucinations’. Issues around biased and unfair outputs, security and privacy vulnerabilities and legal liability persist. Despite extensive research and investment, consumer-level autonomous driving remains out of reach. At present we have relatively ‘narrow’ AI, which performs well at fixed tasks. However, as the biggest technology companies have the means to scale AI training significantly, AI is likely to keep achieving new capabilities and overcoming limitations, with models becoming more efficient and cheaper and easier to build.
We may soon see increasingly autonomous AI agents that can strategise, divide goals into sub-tasks and interact with their environment to take actions. Experimental projects such as AutoGPT, while not yet fully effective, aim to connect chatbots with web browsers and word processors so that sub-tasks can be carried out autonomously. Prominent AI industry figure Mustafa Suleyman sees the next milestone as ‘Artificial Capable Intelligence’, achieved when AI can “go make $1 million on a retail web platform in a few months with just a $100,000 investment.” When this becomes possible, the implications will extend well beyond finance, as the new powers such systems confer on different actors come into view.
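To make the agent idea concrete, the sketch below shows the basic loop such experimental projects follow: a goal is decomposed into sub-tasks, each sub-task is dispatched to a tool, and results feed back as context for the next step. Every name here (plan_subtasks, run_tool) is a hypothetical illustration invented for clarity; it does not reflect AutoGPT's actual implementation or API.

```python
# Minimal sketch of an autonomous agent loop, in the spirit of projects
# like AutoGPT. All functions are illustrative stubs so the sketch runs
# standalone; a real agent would call a language model and real tools.

def plan_subtasks(goal: str) -> list[str]:
    # In a real agent, a language model would decompose the goal.
    # Here we return a fixed plan for illustration.
    return [f"research: {goal}", f"draft: {goal}", f"review: {goal}"]

def run_tool(subtask: str) -> str:
    # A real agent would dispatch to tools such as a web browser or
    # word processor; this stub simply echoes the sub-task.
    return f"completed '{subtask}'"

def agent_loop(goal: str) -> None:
    # Strategise: break the goal into sub-tasks, act on each, and keep
    # the results as context for subsequent steps.
    context: list[str] = []
    for subtask in plan_subtasks(goal):
        result = run_tool(subtask)
        context.append(result)
        print(result)

if __name__ == "__main__":
    agent_loop("summarise recent AI safety research")
```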
Where might we be heading? The explicit aim of the leading AI companies is to build Artificial General Intelligence (AGI): systems that can “match or exceed human abilities in most cognitive work”. As AI gets better at automating tasks such as programming and data collection, we may be surprised at how quickly it advances. Some leading experts believe AGI could be achieved as soon as 2030; however, both its feasibility and its timeline are fiercely debated by experts and researchers. What is broadly agreed within the AI industry and research communities is that conversations on safety and control are essential as systems become more autonomous. This raises risks around malicious actors setting harmful objectives and AI systems pursuing goals that are not aligned with human interests.
Benefits and risks from AI
As AI enhances human intelligence, we are likely to see spectacular breakthroughs. What are we already seeing?
- Healthcare improvements. AI has helped with diagnosis (e.g., increasing detection of diabetes) and with developing treatments (e.g., identifying a new drug for liver cancer).
- Mitigating climate change. AI can enable better flood forecasting, help predict wildfires and track deforestation.
- Development gains. AI can improve agricultural decision-making and increase yields, contributing to food security and economic development.
- Access to personalised education. AI offers the potential for more personalised tuition, can expand education services to remote areas and can help upskill workers across sectors.
- Cultural impacts. AI is helping to preserve dying languages.
- Economic benefits. Generative AI alone is predicted to add up to $4.4 trillion to the global economy.
While the transformative benefits of AI are increasingly evident, there is also growing evidence of the harms it can cause and potential risks ahead. What are we seeing?
- Biases and discriminatory outcomes. The use of AI for purposes such as predictive policing has been shown to reinforce societal discrimination. As foundation models are trained on vast amounts of unstructured data, their outputs have mirrored biases in that data, demonstrating gender, racial and cultural stereotypes and prejudices.
- Cybercrime. There have been cases of AI-generated voices deceiving the public and bypassing banks’ voice-based security checks.
- Synthetic and ‘deep fake’ media. AI has been used to create fake audio and video of prominent individuals, as well as non-consensual intimate imagery.
- Security threats. There is evidence of people ‘jailbreaking’ LLMs, removing the training safeguards that prevent dangerous use cases. In one example, GPT-4 gave advice on planning terrorist attacks when prompted in lower-resource languages such as Scots Gaelic or Zulu.
- Infringement of copyright. Cases include AI companies being sued for training models with copyright material and reproducing protected material in outputs.
- Exploitation of workers. Public AI products require extensive human input to ensure the quality and safety of outputs; workers performing this labour have experienced psychological trauma from reviewing graphic content, as well as low pay and poor working conditions.
- Intrusive surveillance. Facial recognition and other technologies that monitor, track and record individuals’ activities are being used in public places without scrutiny.
- Job losses. Certain professions, such as software development, already face automation; Goldman Sachs estimates that generative AI could expose 300 million jobs worldwide to automation.
The AI Safety Summit focuses on risks from frontier AI. What does this mean?
- Misuse by malicious actors. As frontier AI systems become more advanced and autonomous, the risk grows that malicious actors will use them for cyber-attacks, the production of novel biological or chemical weapons, and social manipulation and deception. Restricting these uses will be very difficult, especially as AI tools are made publicly available through ‘open source’ models.
- Loss of control of AI systems. If highly autonomous AI is built, we risk AI systems pursuing goals that are not aligned with human interests. At present there is no way to align AI behaviour with human values, and there are ongoing questions over how those values should be defined. Risk also arises from AI developers racing to build frontier systems ahead of competitors, neglecting safety testing and human oversight.