Are there any plausibly workable proposals for regulating or banning dangerous AI research?

  • AI is a dual-use technology. There's no easy way to separate the good uses from the bad ones.

  • The benefits are so large that the incentive to push forward is enormous.

  • This is not a mistake. If you have the potential to cure all diseases, solve climate change, and so on, people are going to want to push for that.

  • So getting everyone to agree to ignore those large rewards and hold back seems... unlikely.

  • Regulation might be possible, but a ban is basically a non-starter.

  • There is an entire field, AI policy, devoted to questions like this.