Can we get AGI by scaling up architectures similar to current ones, or are we missing key insights?

It's an open question whether we can create AGI simply by increasing the compute, data, and parameters used to train our current models ("scaling"), or whether AGI requires fundamentally new model architectures or algorithmic insights.

Some researchers have formulated empirical scaling laws that attempt to quantify the relationship between the resources invested in training an AI model and the model's capabilities.
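As a concrete illustration, one well-known scaling law (Hoffmann et al., 2022, the "Chinchilla" paper) models a language model's training loss as a sum of power laws in parameter count and training tokens. The sketch below uses that functional form; the coefficients are approximately the paper's published fit and should be treated as illustrative, not exact.

```python
# A minimal sketch of a Chinchilla-style scaling law (Hoffmann et al., 2022):
# predicted training loss L(N, D) = E + A / N**alpha + B / D**beta, where
# N is parameter count and D is training tokens. Coefficients are roughly
# the paper's published fit; treat them as illustrative.

def predicted_loss(n_params: float, n_tokens: float) -> float:
    """Predicted training loss for a model of n_params trained on n_tokens."""
    E = 1.69                 # irreducible loss (entropy of the text itself)
    A, alpha = 406.4, 0.34   # parameter-count term
    B, beta = 410.7, 0.28    # training-token term
    return E + A / n_params**alpha + B / n_tokens**beta

# Example: a 70B-parameter model trained on 1.4T tokens (Chinchilla's budget).
print(predicted_loss(70e9, 1.4e12))  # ~1.94
```

Formulas like this extrapolate how loss falls as resources grow, but whether lower loss on next-token prediction translates into general intelligence is exactly the open question above.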

For a variety of opinions on this question, see:


