Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that matches or surpasses human cognitive capabilities across a wide range of tasks. This contrasts with narrow AI, which is limited to specific tasks.[1] Artificial superintelligence (ASI), on the other hand, refers to AGI that greatly exceeds human cognitive capabilities. AGI is considered one of the definitions of strong AI.
Creating AGI is a primary goal of AI research and of companies such as OpenAI[2] and Meta.[3] A 2020 survey identified 72 active AGI research and development projects across 37 countries.[4]
The timeline for achieving AGI remains a subject of ongoing debate among researchers and experts. As of 2023, some argue that it may be possible within years or decades; others maintain it might take a century or longer; a minority believe it may never be achieved; and another minority claims that it already exists.[5][6] Notable AI researcher Geoffrey Hinton has expressed concern about the rapid progress towards AGI, suggesting it could be achieved sooner than many expect.[7]
There is debate over the exact definition of AGI and over whether modern large language models (LLMs) such as GPT-4 are early forms of AGI.[8] AGI is a common topic in science fiction and futures studies.[9][10]
Contention exists over whether AGI represents an existential risk.[11][12][13] Many experts on AI have stated that mitigating the risk of human extinction posed by AGI should be a global priority.[14][15] Others find the development of AGI to be too remote to present such a risk.[16][17]