Why is it called The Bletchley Declaration?

Bletchley Park, where the summit was held, is famous for its role during World War II as the central site for British codebreakers, and it is often described as a birthplace of modern computing. It was there that the German Enigma cipher was broken, and it played a critical role in Allied intelligence efforts during the war.

In simple terms, the declaration says that Artificial Intelligence (AI) has the potential to bring a great deal of good to the world, but that it also comes with risks. AI can improve our lives in many areas, such as healthcare, education, and transportation. However, there are concerns about how AI might be misused, and how it can affect things like human rights and privacy.

The declaration emphasizes the need for AI to be developed and used in a safe and responsible way. It also highlights the importance of cooperation between countries to address the challenges and risks associated with AI. There is a particular focus on the risks posed by advanced AI systems that have the potential to cause harm, especially in areas like cybersecurity and biotechnology.

The declaration calls for collaboration between nations, organizations, and researchers to ensure that AI is used for the benefit of all and that its risks are properly managed. It stresses the importance of transparency, accountability, and safety testing when developing advanced AI systems.

The Bletchley Declaration is a call for international cooperation to harness the positive potential of AI while addressing its risks, especially those posed by highly capable AI systems. It aims to ensure that AI benefits everyone and is used responsibly.

It’s important to note that the impact of AI can vary greatly depending on how it’s developed, regulated, and integrated into society. Efforts are being made to harness AI’s potential for good while mitigating its risks and drawbacks.

Good (Benefits of AI):

  1. Efficiency: AI can automate repetitive tasks, increasing efficiency and productivity. For example, in manufacturing, AI-driven robots can assemble products at a much faster rate than humans.
  2. Accuracy: AI can perform tasks with a high degree of accuracy, reducing human errors. In healthcare, AI can help doctors make more accurate diagnoses by analyzing vast amounts of medical data.
  3. 24/7 Availability: AI systems can work around the clock without getting tired. This is valuable in customer service, where chatbots can provide assistance at any time.
  4. Data Analysis: AI can quickly process and analyze large datasets, helping businesses make data-driven decisions and uncover patterns or insights that might be missed by humans.
  5. Personalization: AI can customize experiences for users. For example, streaming services use AI to recommend content based on your viewing history.
  6. Environmental Conservation: AI is playing a growing role in environmental sustainability. It helps monitor and manage environmental resources more efficiently and improves the prediction of weather patterns. Smart grid systems use AI to optimize energy distribution, reducing waste and promoting the use of renewable energy sources, which benefits the environment and lowers energy costs.

Bad (Drawbacks and Risks of AI):

  1. Job Displacement: Automation driven by AI can lead to job loss in certain industries, particularly for tasks that can be easily automated. Workers in manufacturing and customer service might be at risk.
  2. Bias and Discrimination: AI algorithms can inherit biases present in their training data, leading to unfair or discriminatory outcomes. This is a concern in areas like hiring, lending, and law enforcement.
  3. Privacy Concerns: AI systems can collect and analyze vast amounts of personal data, raising concerns about privacy. For instance, smart home devices might record conversations without consent.
  4. Security Threats: AI can be used maliciously, such as for cyberattacks. Hackers can employ AI to find vulnerabilities and launch more sophisticated attacks.
  5. Loss of Control: Highly autonomous AI systems may make decisions that humans don’t fully understand or cannot control. This is a concern in critical domains like autonomous vehicles or military applications.
  6. Economic Inequality: While AI can bring economic benefits, there’s a risk that these benefits will be concentrated in the hands of a few, exacerbating economic inequality.
  7. Ethical Dilemmas: AI raises ethical questions about the use of advanced technologies, such as autonomous weapons or deepfake technology that can create convincing fake videos.

The Bletchley Declaration.

Artificial Intelligence (AI) presents enormous global opportunities: it has the potential to transform and enhance human wellbeing, peace and prosperity. To realise this, we affirm that, for the good of all, AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible. We welcome the international community’s efforts so far to cooperate on AI to promote inclusive economic growth, sustainable development and innovation, to protect human rights and fundamental freedoms, and to foster public trust and confidence in AI systems to fully realise their potential.

AI systems are already deployed across many domains of daily life including housing, employment, transport, education, health, accessibility, and justice, and their use is likely to increase. We recognise that this is therefore a unique moment to act and affirm the need for the safe development of AI and for the transformative opportunities of AI to be used for good and for all, in an inclusive manner in our countries and globally. This includes for public services such as health and education, food security, in science, clean energy, biodiversity, and climate, to realise the enjoyment of human rights, and to strengthen efforts towards the achievement of the United Nations Sustainable Development Goals.

Alongside these opportunities, AI also poses significant risks, including in those domains of daily life. To that end, we welcome relevant international efforts to examine and address the potential impact of AI systems in existing fora and other relevant initiatives, and the recognition that the protection of human rights, transparency and explainability, fairness, accountability, regulation, safety, appropriate human oversight, ethics, bias mitigation, privacy and data protection needs to be addressed. We also note the potential for unforeseen risks stemming from the capability to manipulate content or generate deceptive content. All of these issues are critically important and we affirm the necessity and urgency of addressing them.

Particular safety risks arise at the ‘frontier’ of AI, understood as being those highly capable general-purpose AI models, including foundation models, that could perform a wide variety of tasks – as well as relevant specific narrow AI that could exhibit capabilities that cause harm – which match or exceed the capabilities present in today’s most advanced models. Substantial risks may arise from potential intentional misuse or unintended issues of control relating to alignment with human intent. These issues are in part because those capabilities are not fully understood and are therefore hard to predict. We are especially concerned by such risks in domains such as cybersecurity and biotechnology, as well as where frontier AI systems may amplify risks such as disinformation. There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models. Given the rapid and uncertain rate of change of AI, and in the context of the acceleration of investment in technology, we affirm that deepening our understanding of these potential risks and of actions to address them is especially urgent.

Many risks arising from AI are inherently international in nature, and so are best addressed through international cooperation. We resolve to work together in an inclusive manner to ensure human-centric, trustworthy and responsible AI that is safe, and supports the good of all through existing international fora and other relevant initiatives, to promote cooperation to address the broad range of risks posed by AI. In doing so, we recognise that countries should consider the importance of a pro-innovation and proportionate governance and regulatory approach that maximises the benefits and takes into account the risks associated with AI. This could include making, where appropriate, classifications and categorisations of risk based on national circumstances and applicable legal frameworks. We also note the relevance of cooperation, where appropriate, on approaches such as common principles and codes of conduct. With regard to the specific risks most likely found in relation to frontier AI, we resolve to intensify and sustain our cooperation, and broaden it with further countries, to identify, understand and as appropriate act, through existing international fora and other relevant initiatives, including future international AI Safety Summits.

All actors have a role to play in ensuring the safety of AI: nations, international fora and other initiatives, companies, civil society and academia will need to work together. Noting the importance of inclusive AI and bridging the digital divide, we reaffirm that international collaboration should endeavour to engage and involve a broad range of partners as appropriate, and welcome development-orientated approaches and policies that could help developing countries strengthen AI capacity building and leverage the enabling role of AI to support sustainable growth and address the development gap.

We affirm that, whilst safety must be considered across the AI lifecycle, actors developing frontier AI capabilities, in particular those AI systems which are unusually powerful and potentially harmful, have a particularly strong responsibility for ensuring the safety of these AI systems, including through systems for safety testing, through evaluations, and by other appropriate measures. We encourage all relevant actors to provide context-appropriate transparency and accountability on their plans to measure, monitor and mitigate potentially harmful capabilities and the associated effects that may emerge, in particular to prevent misuse and issues of control, and the amplification of other risks.

In the context of our cooperation, and to inform action at the national and international levels, our agenda for addressing frontier AI risk will focus on:

  • identifying AI safety risks of shared concern, building a shared scientific and evidence-based understanding of these risks, and sustaining that understanding as capabilities continue to increase, in the context of a wider global approach to understanding the impact of AI in our societies.
  • building respective risk-based policies across our countries to ensure safety in light of such risks, collaborating as appropriate while recognising our approaches may differ based on national circumstances and applicable legal frameworks. This includes, alongside increased transparency by private actors developing frontier AI capabilities, appropriate evaluation metrics, tools for safety testing, and developing relevant public sector capability and scientific research.

In furtherance of this agenda, we resolve to support an internationally inclusive network of scientific research on frontier AI safety that encompasses and complements existing and new multilateral, plurilateral and bilateral collaboration, including through existing international fora and other relevant initiatives, to facilitate the provision of the best science available for policy making and the public good.

In recognition of the transformative positive potential of AI, and as part of ensuring wider international cooperation on AI, we resolve to sustain an inclusive global dialogue that engages existing international fora and other relevant initiatives and contributes in an open manner to broader international discussions, and to continue research on frontier AI safety to ensure that the benefits of the technology can be harnessed responsibly for good and for all. We look forward to meeting again in 2024.

The Countries That Attended The Summit.

  • Australia
  • Brazil
  • Canada
  • Chile
  • China
  • European Union
  • France
  • Germany
  • India
  • Indonesia
  • Ireland
  • Israel
  • Italy
  • Japan
  • Kenya
  • Kingdom of Saudi Arabia
  • Netherlands
  • Nigeria
  • The Philippines
  • Republic of Korea
  • Rwanda
  • Singapore
  • Spain
  • Switzerland
  • Türkiye
  • Ukraine
  • United Arab Emirates
  • United Kingdom of Great Britain and Northern Ireland
  • United States of America

References to ‘governments’ and ‘countries’ include international organisations acting in accordance with their legislative or executive competences.