What is compute governance?

Compute governance is a type of AI governance that focuses on controlling access to the computing hardware needed to develop and run AI. It has been argued that regulating compute is particularly promising compared to regulating other inputs to AI progress, such as data, algorithms, or human talent, because compute is physical, quantifiable, and produced by a highly concentrated supply chain, which makes it easier to detect and control than those other inputs.

Although compute governance is one of the more frequently proposed strategies for AI governance, as of November 2024, there are few policies in place for governing compute, and much of the research on the topic is exploratory. Currently enforced measures related to compute governance include US export controls on advanced microchips to China and reporting requirements for large training runs in the US and the EU.

According to Sastry et al., compute governance can be used toward three main ends:

  1. Visibility is the ability of policymakers to know what’s going on in AI, so they can make informed decisions. The amount of compute used for a training run serves as a rough proxy for the capabilities and risks of the resulting system. Measures to improve visibility could include:

    1. Using public information (such as parameter counts and training dataset sizes) to estimate the compute used; a sketch of this kind of estimate appears after this list.

    2. Requiring AI developers and cloud providers to report large training runs.

    3. Creating an international registry for AI chips.

    4. Designing systems that monitor the general type of workload running on AI chips while preserving the privacy of sensitive information.

  2. Allocation refers to policymakers influencing the amount of compute available to different projects. Strategies in this category include:

    1. Making compute available for research toward technologies that increase safety and defensive capabilities, or that substitute for more dangerous alternatives.

    2. Speeding up or slowing down the general rate of AI progress.

    3. Restricting or expanding the range of countries or groups with access to certain systems.

    4. Creating an international megaproject aimed at developing AI technologies — such proposals are sometimes called “CERN for AI”.

  3. Enforcement refers to policymakers ensuring that the relevant actors abide by their rules. This could potentially be enabled by the right kind of software or hardware; hardware-based enforcement is likely to be harder to circumvent. Strategies here include:

    1. Restricting networking capabilities to make chips harder to use in very large clusters.

    2. Modifying chips to include cryptographic mechanisms that automatically verify or enforce restrictions on the types of tasks the chips are allowed to run; a toy illustration of the underlying signature check appears after this list.

    3. Designing chips so that they can be controlled multilaterally, similar to “permissive action links” for nuclear weapons.

    4. Restricting access to compute through, for instance, cloud compute providers.
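
To make the visibility measures more concrete, here is a minimal sketch of how training compute might be estimated from public information, using the common rule of thumb that training FLOP ≈ 6 × parameters × training tokens, and compared against reporting thresholds such as the 10^26 FLOP figure in the 2023 US executive order on AI and the 10^25 FLOP figure in the EU AI Act. The model figures in the example are invented for illustration.

```python
# Illustrative sketch: estimating training compute from public information
# and checking it against reporting thresholds. The "6 * N * D" rule of thumb
# (FLOP ≈ 6 × parameters × training tokens) is a standard approximation for
# dense transformer training; the model figures below are made up.

# Reporting thresholds (floating-point operations) drawn from public policy:
# - 1e26 FLOP: reporting threshold in the 2023 US executive order on AI
# - 1e25 FLOP: "systemic risk" presumption threshold in the EU AI Act
THRESHOLDS = {
    "US executive order (reporting)": 1e26,
    "EU AI Act (systemic risk presumption)": 1e25,
}


def estimate_training_flop(parameters: float, training_tokens: float) -> float:
    """Rough training-compute estimate for a dense transformer: ~6 FLOP
    per parameter per training token (about 2 for the forward pass, 4 backward)."""
    return 6 * parameters * training_tokens


def check_thresholds(flop: float) -> None:
    for name, threshold in THRESHOLDS.items():
        status = "exceeds" if flop >= threshold else "is below"
        print(f"  {status} the {name} threshold of {threshold:.0e} FLOP")


if __name__ == "__main__":
    # Hypothetical model: 70 billion parameters trained on 15 trillion tokens.
    flop = estimate_training_flop(parameters=70e9, training_tokens=15e12)
    print(f"Estimated training compute: {flop:.2e} FLOP")
    check_thresholds(flop)
```

In practice, such estimates carry significant uncertainty, since parameter counts and training dataset sizes are often not disclosed.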
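
On the enforcement side, no chips today ship with the kind of on-device verification described above, but the core cryptographic ingredient, checking a digital signature before a workload is allowed to run, is standard. The toy sketch below signs a hypothetical "workload license" with an issuer's key and verifies it before accepting a job; the names and license format are invented for this example, and real hardware mechanisms would be far more involved.

```python
# Purely illustrative: a signed "workload license" checked before a job runs.
# Real hardware-based enforcement would involve secure firmware and attestation;
# this only shows the signature check at the core of such schemes.
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In a real scheme the issuing key might belong to a regulator or a
# multilateral body; here we simply generate one for the demo.
issuer_key = Ed25519PrivateKey.generate()
issuer_public_key = issuer_key.public_key()

# Hypothetical license: which workload types this cluster may run, and a cap.
license_doc = json.dumps(
    {"allowed_tasks": ["inference"], "max_flop": 1e24}, sort_keys=True
).encode()
signature = issuer_key.sign(license_doc)


def verify_license(doc: bytes, sig: bytes) -> bool:
    """Accept the workload only if the license carries a valid signature."""
    try:
        issuer_public_key.verify(sig, doc)
        return True
    except InvalidSignature:
        return False


print(verify_license(license_doc, signature))                # True: accepted
print(verify_license(license_doc + b"tampered", signature))  # False: rejected
```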

Many of these mechanisms are speculative and would require further research before they could be implemented. They could end up being risky or ineffective. However, many safety researchers think compute governance would help avert major existential risks to humanity.
