  • PauseAI protest @ The Hague, Netherlands - August 11th

    We are organizing a protest to demand a pause on dangerous AI development.

  • PauseAI protest @ FCDO, London - July 13th
  • PauseAI protest @ FCDO, London - July 18th
  • (Cancelled) PauseAI protest @ United Nations, NYC - July 18th
  • PauseAI protest @ Office for AI - June 29th
  • PauseAI protest @ Parliament Square - June 8th

    We are organizing a protest at Parliament Square to demand a summit to pause AI development.

  • PauseAI protest @ Melbourne - June 16th

    Join PauseAI for an upcoming peaceful protest at the Melbourne Convention and Exhibition Centre (MCEC), where Sam Altman will be giving a talk.

  • PauseAI protest @ Google DeepMind - May 19th - 22nd

    We are organizing a protest at Google DeepMind to demand a summit to pause AI development.

  • PauseAI protest @ Bletchley Park - November 1st

    We are organizing a protest at Bletchley Park during the AI Safety Summit.

  • International PauseAI protest - October 21st, 2023

    We are organizing an international protest to demand a pause on dangerous AI development.

  • PauseAI / No AGI Protest @ OpenAI San Francisco - February 12th, 2024

    We are organizing a protest to demand a pause on dangerous AI development.

  • 4 Levels of AI safety regulation

    A framework for thinking about how to mitigate the risks from powerful AI systems

  • Take action

    Ways to help out with pausing AGI development.

  • Why an AI takeover could be very likely

    As AI surpasses human capabilities, an AI takeover becomes increasingly likely.

  • PauseAI protest @ Microsoft Brussels - May 23rd, 2023

    We are organizing a protest at Microsoft to demand a summit to pause AI development.

  • Rebutting skeptical arguments about AI existential risks

    Why AI existential risks are real and deserve serious attention

  • Cybersecurity risks of AI

    How AI could be used to hack all devices.

  • Regulating dangerous capabilities in AI

    The more powerful AI becomes in specific domains, the larger the risks become. How do we prevent these dangerous capabilities from appearing or spreading?

  • FAQ

    Frequently asked questions about PauseAI and the risks of superintelligent AI.

  • Implementing a Pause internationally - addressing the hard questions

    What would an AI Pause look like? And how do you enforce it over time, so that a superintelligence is never created?

  • Join PauseAI

    Sign up to join the PauseAI movement.

  • Learn why AI safety matters

    Educational resources (videos, articles, books) about AI risks and AI alignment

  • Tips for effective lobbying

    How to convince your government that it needs to work towards a pause on AI training runs

  • Pausing AI development might go wrong. How do we mitigate the risks?

    This article addresses some of the risks of pausing AI development, and how to mitigate them.

  • PauseAI candlelit vigil @ UN HQ, NYC - June 3rd
  • Offense / Defense balance in AI safety

    How to think about the balance between offense and defense in AI safety

  • San Francisco's 3-Day Picket: Demanding a Pause on Advanced AI Development Near OpenAI
  • Organizing a PauseAI protest

    It's not very hard!

  • Polls & surveys on AI governance, safety and risks

    How much do regular people and experts worry about AI risks?

  • PauseAI Proposal

    Implement a temporary pause on the training of AI systems more powerful than GPT-4, ban training on copyrighted material, and hold model creators liable.

  • PauseAI Protesters Code of Conduct
  • PauseAI Protests

    When and where we will be protesting.

  • The difficult psychology of existential risk

    Thinking about the end of the world is hard.

  • Risks of artificial intelligence

    AI threatens our democracy, our technology, and our species.

  • Concrete scenarios for catastrophic AI risks

    How superintelligent AI could cause human extinction.

  • State-of-the-art AI capabilities vs humans

    How smart are the latest AI models compared to humans?

  • Towards the next AI Safety Summit (Seoul 2024)

    Why we need the AI safety summit to happen, and what it should achieve.

  • Quotes

    Quotes about risks from artificial intelligence

  • List of p(doom) values

    How likely do various AI researchers believe it is that AI will cause human extinction?

  • PauseAI Local Communities

    A map of all the local PauseAI communities and people around the world. Also shows adjacent AI Safety communities.

  • Email Builder

    A web app to help you write an email to a politician. Convince them to Pause AI!

  • Why we might have superintelligence sooner than most think

    We're underestimating the progress of AI, and there is a small but realistic chance that we are very close to a superintelligence.

  • How to write a letter or email to someone in power

    A guide to writing a lobbying letter

  • Writing press releases for protests

    How to write an effective press release to get media coverage for a protest.

  • The existential risk of superintelligent AI

    Why AI is a risk for the future of our existence, and why we need to pause development.

  • AI Outcomes

    What will happen if we continue to build AI?
