The Paris AI Action Summit was a significant event for international cooperation on artificial intelligence. It brought together world leaders, industry experts, and academics to establish a global direction for AI’s responsible and inclusive development. Sixty countries, including France, China, India, Japan, Australia, and Canada, signed a declaration committing to AI principles that are “open, inclusive, transparent, ethical, safe, secure, and trustworthy.” Additionally, the declaration emphasised the importance of ensuring that AI development is sustainable for both people and the planet.

The UK, however, along with the US, refused to sign. The government defended its decision, stating that the declaration did not go far enough on practical governance measures and national security considerations. This response raises the question of whether the UK is positioning itself as a true global leader in AI or merely following US policy decisions that prioritise corporate interests over inclusive innovation.

The rejection of the Paris Declaration is particularly frustrating given the UK’s previous commitments to AI governance, such as the Bletchley Park AI Safety Summit and the release of the UK Government’s AI Playbook. These initiatives highlight the UK’s ambition to lead in AI, yet refusing to support an international commitment to open and inclusive AI seems counterproductive to this goal.

This decision also risks the UK losing credibility in global AI governance discussions. Campaigners and AI ethics groups have already expressed concern that failing to support a declaration focused on inclusivity and transparency could undermine the UK’s reputation as a country that champions responsible AI development.

Why Open and Inclusive AI is Essential

The refusal to sign the declaration also overlooks the critical role of open and inclusive AI in ensuring fair, equitable, and ethical technological progress. AI is no longer confined to research labs or large technology firms—it is already shaping economies, public services, and daily life. For AI to be truly beneficial, it must be transparent, accountable, and accessible.

Faster Innovation Through Transparent AI

Open-source, transparent AI development allows multiple contributors to improve and refine models, leading to faster development cycles and more efficient problem-solving. Unlike closed AI systems, which a few large technology firms control, open-source AI fosters a community-driven approach in which expertise from diverse fields helps refine AI solutions and prevent harmful biases from becoming embedded.

Wider Accessibility and Inclusivity

Closed-source AI creates an environment where only those with financial power can access cutting-edge models and deploy AI at scale. Open-source AI breaks down these barriers, allowing researchers, startups, and smaller organisations to leverage AI without excessive costs. This is particularly crucial for public sector organisations that seek to deploy AI for social good rather than purely commercial gain.
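
To make that concrete, here is a minimal sketch (assuming the open-source Hugging Face transformers library, with a small openly licensed model standing in for any open model) of how a researcher or startup can run open weights locally, without licence fees or a gated vendor API:

```python
# A minimal sketch of the low barrier to entry that open-weight models create.
# Assumes the open-source Hugging Face "transformers" library is installed;
# "distilgpt2" is used purely as an example of a small, openly licensed model.
from transformers import pipeline

# Download the model weights and run them locally: no licence fee, no vendor API key.
generator = pipeline("text-generation", model="distilgpt2")

result = generator("Open AI ecosystems let small teams", max_new_tokens=20)
print(result[0]["generated_text"])
```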

Transparency and Accountability

One of the key concerns around AI is the lack of transparency, particularly in decision-making for high-risk areas such as criminal justice, healthcare, and finance. Proprietary AI models often function as black boxes, making it difficult to understand how they arrive at decisions. Open-source AI, by contrast, allows public scrutiny of training data, algorithms, and biases, making AI systems more accountable and trustworthy.
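
The contrast can be shown with a toy example. In the sketch below (synthetic data, hypothetical feature names, assuming scikit-learn), an open model's decision weights can be read directly, so a reviewer could spot, say, a postcode feature influencing a lending decision; a proprietary black box exposes only its final answers:

```python
# A toy sketch of the scrutiny an open model permits, assuming scikit-learn.
# The data is synthetic and the feature names are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                    # hypothetical applicant features
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)    # synthetic loan decisions

model = LogisticRegression().fit(X, y)

# With an open model, anyone can see exactly which inputs drive its decisions.
for name, weight in zip(["income", "postcode_score", "payment_history"], model.coef_[0]):
    print(f"{name}: weight = {weight:+.2f}")
```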

Community-Driven Security and Transparent AI as a Catalyst for Innovation

Critics of open and transparent AI often argue that it could stifle innovation by making proprietary models and techniques freely accessible, discouraging commercial investment. However, this view fails to consider how transparency accelerates progress and strengthens AI security.

Consider the recent XZ Utils incident, in which a deliberately planted backdoor in a widely used open-source compression library was discovered before it could be widely exploited: an engineer investigating unusual SSH login latency was able to trace the problem through the publicly available source. This case highlights that openness is not a weakness but a safeguard, since the codebase's transparency enabled independent scrutiny and early detection of the threat.
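
That kind of scrutiny rests on a wider culture of verifiability that open distribution makes possible. As one small illustration (the file name and expected hash below are hypothetical placeholders, not real release values, and this is not how the XZ backdoor itself was found), anyone can check a downloaded open-source release against its published checksum before trusting it:

```python
# A minimal sketch of baseline supply-chain verification in an open ecosystem:
# comparing a downloaded release against its published SHA-256 checksum.
# The file name and expected hash are hypothetical placeholders.
import hashlib

EXPECTED_SHA256 = "0f1e2d..."  # hypothetical value from the project's release notes

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

if sha256_of("example-release.tar.gz") != EXPECTED_SHA256:
    raise SystemExit("Checksum mismatch: do not install this artifact.")
print("Checksum verified.")
```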

By contrast, closed-source AI models introduce greater risks. Proprietary systems operate in secrecy, so biases, security flaws, and unethical practices can go unnoticed. Without external oversight, organisations deploying such models must trust that vendors have taken the necessary precautions, an assumption that history has repeatedly shown to be flawed.

Beyond security, the real question is whether AI transparency hinders innovation. The evidence suggests the opposite: transparent AI fosters trust, collaboration, and a culture of continuous improvement, all of which drive faster and more ethical AI development.

Why Transparency Fuels, Rather than Stifles, AI Innovation

1. Building Trust for Wider AI Adoption

Trust is fundamental to the widespread deployment of AI across industries and public services. When businesses and governments understand how an AI system works, they are more likely to trust its outputs and integrate them into mission-critical decisions. This trust leads to greater adoption and, in turn, more opportunities for innovation.

2. Identifying Biases and Errors for Better AI Performance

A transparent AI system allows for greater scrutiny, making it easier to detect biases, ethical concerns, and technical flaws. This results in higher-quality models that are more accurate, fair, and safe, a necessary condition for AI to reach its full potential.
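
As a concrete illustration (a toy sketch with synthetic decisions and a hypothetical demographic label; real audits would use established fairness toolkits), the most basic form of such scrutiny is simply comparing a model's outcomes across groups:

```python
# A toy sketch of the simplest bias check transparency enables: comparing a
# model's positive-outcome rate across demographic groups. The data is
# synthetic and the group labels are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(1)
group = rng.choice(["A", "B"], size=1_000)
# Synthetic decisions with a built-in disparity, so the check has something to find.
approved = rng.random(1_000) < np.where(group == "A", 0.60, 0.45)

rates = {g: approved[group == g].mean() for g in ("A", "B")}
for g, rate in rates.items():
    print(f"Group {g}: approval rate = {rate:.1%}")

# A large gap between groups is a red flag that warrants investigation.
print(f"Disparity: {abs(rates['A'] - rates['B']):.1%}")
```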

3. Encouraging Collaboration and Open Innovation

Transparency invites collaboration between researchers, developers, and policymakers. When AI models are openly shared, ideas evolve faster, new applications emerge, and breakthroughs happen at a greater pace. Open-source projects such as TensorFlow and PyTorch have demonstrated how sharing knowledge and resources accelerates AI advancements rather than limiting them.
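
PyTorch itself shows how far that sharing goes: a complete, working training step is a few transparent lines that anyone can run, inspect, and build on. The sketch below (a tiny model on random data, purely illustrative) uses nothing beyond the open framework:

```python
# A minimal, self-contained PyTorch training step: the kind of openly shared
# building block that lets researchers and startups iterate quickly.
# The tiny linear model and random batch are illustrative only.
import torch
import torch.nn as nn

model = nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

x, y = torch.randn(32, 4), torch.randn(32, 1)  # synthetic batch

optimizer.zero_grad()
loss = loss_fn(model(x), y)   # forward pass
loss.backward()               # gradients via open, inspectable autograd
optimizer.step()              # parameter update
print(f"loss: {loss.item():.4f}")
```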

4. Balancing Transparency with Business Considerations

Concerns that transparency could compromise trade secrets and competitive advantage are valid. Companies investing heavily in proprietary AI models may be reluctant to share their methodologies. However, this does not mean AI development should be closed off entirely. A balanced approach, in which core ethical principles, data provenance, and safety mechanisms are disclosed while specific proprietary techniques remain protected, is both possible and necessary.
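
One widely used vehicle for this kind of selective disclosure is a model card. The sketch below (all fields and values are hypothetical) illustrates how provenance, limitations, and safety evaluations can be published while proprietary internals stay private:

```python
# A sketch of selective disclosure via a model card: ethical and provenance
# information is published, proprietary internals are not. All fields and
# values here are hypothetical placeholders.
model_card = {
    "intended_use": "Decision support, with a human reviewing every output",
    "training_data_provenance": "Licensed and public datasets, sources listed",
    "known_limitations": ["Under-represents non-English text"],
    "safety_evaluations": ["Independent bias audit", "Red-team review"],
    # Exact architecture, weights, and training recipe can remain undisclosed
    # without hiding any of the information above.
}

for field, value in model_card.items():
    print(f"{field}: {value}")
```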

The Political and Economic Implications of Rejecting Inclusive AI

The refusal to sign the Paris Declaration aligns with a broader trend of tech protectionism, where countries—particularly the US—prioritise corporate interests over global collaboration. JD Vance, the US Vice President, made this clear in his speech at the Paris Summit, criticising European AI regulations for being too restrictive. His comments reflect a worrying tendency to prioritise rapid AI development over ethical considerations.

For the UK, aligning itself too closely with the US position could be detrimental in the long run. Britain risks losing control over its AI governance and technological sovereignty if it becomes overly reliant on American AI firms. The UK already lags in AI investment compared to the US and China, and a failure to embrace open AI ecosystems will only widen this gap.

In contrast, countries such as France, Canada, and Australia are positioning themselves as leaders in inclusive, ethical, and sustainable AI development. These nations recognise that open AI ecosystems fuel economic growth without concentrating power in the hands of a few tech giants. The UK's decision to remain outside this coalition may isolate it from future AI governance frameworks.

AI is often discussed in grand theoretical terms rather than tackled with practical, implementable policies. While politicians make sweeping declarations about AI safety and innovation, they fail to address tangible issues such as data quality, algorithmic bias, and real-world AI accountability. This is particularly relevant in light of the UK’s refusal to sign the Paris Declaration. If the government is genuinely concerned about practical AI governance, then rejecting a commitment to transparency, inclusivity, and ethical development seems entirely contradictory. Instead of engaging with real concerns about AI safety and accessibility, the UK has chosen a path that leaves critical governance gaps unaddressed.

A Path Forward: The UK Must Champion Open and Ethical AI

The UK still has an opportunity to correct course. Rather than retreating from international collaboration, the government should:

  • Actively support open-source AI initiatives to ensure transparency, security, and inclusivity remain central to AI development.
  • Develop a national AI strategy prioritising public sector AI adoption, ensuring that AI is used ethically and effectively in healthcare, justice, and education.
  • Implement robust AI governance frameworks that prevent bias, misinformation, and ethical breaches while encouraging responsible innovation.
  • Work with global partners to create strong AI safety and accountability measures rather than rejecting agreements that seek to establish common standards.

The Paris AI Summit should have been an opportunity for the UK to lead global AI governance. Instead, it has chosen to distance itself from a progressive, open, and inclusive AI future. If the UK is serious about being a leader in AI, it must embrace transparency, accountability, and collaboration, not just in words but in action.
