About us

We are a small, non-profit campaign of people concerned about the risk of human extinction from AI.
Andrea Miotti (also working at Conjecture) is the campaign's director. Because the campaign runs against the interests of companies with billions of dollars at stake, its funders wish to remain private for now.

The most urgent step towards a safer future is to create concrete and effective policy inhibiting the unrestricted creation of godlike AI. Creating godlike AI is a key aim of the three largest and best-funded AI companies (OpenAI, Anthropic, and DeepMind), even though there is currently no understanding of how complex AI systems work, and even less understanding of how to align the values of those systems with human values.

There is a real possibility that godlike AI could exist within two or three years, yet even the top AI companies admit there is no clear method by which it could be controlled or made safe. Without methods to control AI or ensure its safety, it is estimated that there is a 10-25% chance it will cause catastrophe. Despite these concerns, the top three AI companies are locked in competition, racing towards AGI and receiving billions of dollars in funding to do it.

Email: contact@controlai.com

The 2023 AI Safety Summit on November 1-2 in the UK is the greatest chance yet to challenge the reckless race to create godlike AI.

Despite this, at the summit Anthropic plans to push through so-called “Responsible Scaling”.

“Responsible Scaling” is nothing more than a dressed-up plea to continue the race, minimise restrictions, and allow AI companies to regulate themselves while creating the most dangerous technology in human history. We want to stop so-called “Responsible Scaling” from becoming the policy of any government or regulatory body.

Responsible Scaling won’t work. But there are robust, definitive alternatives that can avert catastrophe and lead to a safer future.

We are campaigning for the creation of a Multinational Artificial General Intelligence Consortium (MAGIC). MAGIC would be an institution built on international collaboration, and the only institution in the world permitted to create advanced AI. Restricting the ability to create AGI to a single international body would ensure that AI is developed safely, in a secure environment.

If the summit adopts “Responsible Scaling”, AI companies will continue the reckless race to create godlike AI. They will do so without restrictions or oversight, and with the ability to drop any safety measures whenever they choose. At the end of the race could lie unfathomable catastrophe. Humanity deserves better.

Join our newsletter

Sign up to our newsletter to receive all our latest updates and blog posts directly in your email inbox.

Sign up now
