Wouldn't a superintelligence be slowed down by the need to do physical experiments?

A superintelligence will be able to carry out theoretical reasoning at millions of times human speed, while real-world experimentation will remain far slower. This gap might make experimentation a limiting factor, but we should note:

  • Experiments can often be replaced by approximate simulations, as long as the results don't depend on fine-grained physics that is unknown or impractically expensive to compute (a minimal simulation sketch appears after this list).

  • Theory and experiment can substitute for each other to some extent. Just because it takes humans a certain amount of experimentation to make an advance doesn't mean that a vastly more intelligent AI would require the same amount. Humans often extract far less than the maximum possible information from an experiment. In many cases, the information needed to be confident in a hypothesis isn't much more than the information needed to notice the hypothesis as a possibility in the first place (e.g., general relativity was already a good explanation of existing physics before being specifically confirmed by experiment); the evidence-counting sketch after this list makes this concrete.

  • Experiments at the nanoscale can be extremely fast because the distances involved are so short, and many physical timescales shrink rapidly as distances shrink (see the diffusion-time sketch after this list).

  • A superintelligence that operates efficiently can run many experiments in parallel (when choosing which experiments to run doesn't depend on the results of a long chain of previous experiments; see the scheduling sketch after this list).

  • Being able to develop theory much faster means a superintelligence can search through the entire tree of possible technological advances and select whichever path requires the least experimentation, even if that isn't the path humans would have chosen (the path-search sketch after this list shows the mechanic).
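
To make the first point concrete, here is a minimal sketch of a simulation standing in for a physical measurement: the drop time of an object, found by crude numerical integration. The scenario, height, and step size are arbitrary assumptions; the point is that a coarse model matches the exact answer without modeling any fine-grained physics.

```python
# Approximate simulation standing in for a physical measurement:
# drop time of an object from 20 m, ignoring air resistance.
# All numbers are illustrative assumptions.

g = 9.81       # gravitational acceleration, m/s^2
height = 20.0  # drop height, m
dt = 0.001     # timestep; a finer dt buys accuracy at compute cost

t, v, y = 0.0, 0.0, height
while y > 0:
    v += g * dt  # semi-implicit Euler: update velocity...
    y -= v * dt  # ...then position
    t += dt

exact = (2 * height / g) ** 0.5
print(f"simulated: {t:.3f} s, exact: {exact:.3f} s")
# simulated: 2.019 s, exact: 2.019 s -- close enough for most purposes
```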
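
The evidence-counting sketch gives a rough Bayesian reading of the information point: singling out one hypothesis among a million candidates already "costs" about 20 bits, while pushing a hypothesis you have noticed from even odds to 99.9% confidence costs only about 10 more. The candidate count and confidence levels are arbitrary assumptions.

```python
import math

# Toy Bayesian accounting (the numbers are illustrative, not from the article).

def bits_to_locate(num_candidates: int) -> float:
    """Bits needed to single out one hypothesis among equally likely candidates."""
    return math.log2(num_candidates)

def bits_to_confirm(prior: float, posterior: float) -> float:
    """Bits of evidence needed to move a hypothesis from prior to posterior."""
    prior_odds = prior / (1 - prior)
    posterior_odds = posterior / (1 - posterior)
    return math.log2(posterior_odds / prior_odds)

print(bits_to_locate(2**20))        # 20.0 bits to notice one of ~1M candidates
print(bits_to_confirm(0.5, 0.999))  # ~10.0 bits to confirm it to 99.9%
```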
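
The diffusion-time sketch: a particle takes roughly L²/D to diffuse a distance L, so shrinking an experiment 100× in length speeds it up about 10,000×. The diffusion coefficient below is a standard order-of-magnitude value for a small molecule in water; the two length scales are illustrative.

```python
# Order-of-magnitude diffusion times: t ~ L^2 / D.
D = 1e-9  # m^2/s, typical for a small molecule in water

for label, L in [("cell-sized (10 um)", 1e-5),
                 ("nanoscale (100 nm)", 1e-7)]:
    t = L**2 / D  # characteristic diffusion time, seconds
    print(f"{label}: ~{t:.0e} s")

# cell-sized (10 um): ~1e-01 s
# nanoscale (100 nm): ~1e-05 s -- 10,000x faster at 1/100 the distance
```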
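
The scheduling sketch is a toy accounting of the parallelism point and its caveat: the longest dependency chain sets a floor on wall-clock time no matter how many experiments run at once. Every number here is an invented assumption.

```python
# Toy model (all numbers invented): wall-clock time for a research program.
num_experiments = 1000
time_per_experiment = 1.0  # hours per experiment, assumed
parallel_capacity = 100    # experiments that can run at once, assumed
dependency_depth = 5       # longest chain that must run strictly in sequence

serial_time = num_experiments * time_per_experiment
# Parallel time is bounded below both by throughput and by the dependency chain.
parallel_time = max(dependency_depth,
                    num_experiments / parallel_capacity) * time_per_experiment

print(serial_time)    # 1000.0 hours run one at a time
print(parallel_time)  # 10.0 hours in parallel (dependencies alone would allow 5)
```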
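
The path-search sketch: selecting a development path is, in miniature, a shortest-path problem in which edge weights count experiments rather than distance. The "tech tree" below is entirely invented; a standard Dijkstra search picks the theory-heavy route with the fewest total experiments.

```python
import heapq

# Hypothetical tech tree: edges are advances, weights are experiments required.
tech_tree = {
    "start":     [("advance_a", 5), ("advance_b", 1)],
    "advance_a": [("goal", 1)],        # familiar route: 6 experiments total
    "advance_b": [("advance_c", 1)],
    "advance_c": [("goal", 1)],        # theory-heavy route: 3 experiments total
    "goal":      [],
}

def fewest_experiments(graph, start, goal):
    """Dijkstra's algorithm, minimizing total experiments along the path."""
    frontier = [(0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, experiments in graph[node]:
            heapq.heappush(frontier, (cost + experiments, neighbor, path + [neighbor]))
    return None

print(fewest_experiments(tech_tree, "start", "goal"))
# (3, ['start', 'advance_b', 'advance_c', 'goal'])
```

In a real search the tree would be vast and the edge weights unknown in advance; the sketch only shows the selection principle.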


