Artificial general intelligence

Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that matches or surpasses human capabilities across a wide range of cognitive tasks. This contrasts with narrow AI, which is limited to specific tasks.[1] Artificial superintelligence (ASI), by contrast, refers to AGI that greatly exceeds human cognitive capabilities. AGI falls under one of the common definitions of strong AI.

Creating AGI is a primary goal of AI research and of companies such as OpenAI[2] and Meta.[3] A 2020 survey identified 72 active AGI research and development projects across 37 countries.[4]

The timeline for achieving AGI remains a subject of ongoing debate among researchers and experts. As of 2023, some argue that it may be possible within years or decades; others maintain it might take a century or longer; a minority believe it may never be achieved; and another minority claim it already exists.[5][6] Notable AI researcher Geoffrey Hinton has expressed concern about the rapid progress towards AGI, suggesting it could be achieved sooner than many expect.[7]

There is debate over the exact definition of AGI and over whether modern large language models (LLMs) such as GPT-4 are early forms of AGI.[8] AGI is a common topic in science fiction and futures studies.[9][10]

Contention exists over whether AGI represents an existential risk.[11][12][13] Many experts on AI have stated that mitigating the risk of human extinction posed by AGI should be a global priority.[14][15] Others find the development of AGI to be too remote to present such a risk.[16][17]

  1. ^ Krishna, Sri (9 February 2023). "What is artificial narrow intelligence (ANI)?". VentureBeat. Retrieved 1 March 2024. ANI is designed to perform a single task.
  2. ^ "OpenAI Charter". OpenAI. Retrieved 6 April 2023. Our mission is to ensure that artificial general intelligence benefits all of humanity.
  3. ^ Heath, Alex (18 January 2024). "Mark Zuckerberg's new goal is creating artificial general intelligence". The Verge. Retrieved 13 June 2024. Our vision is to build AI that is better than human-level at all of the human senses.
  4. ^ Baum, Seth D. (2020). A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy (PDF) (Report). Global Catastrophic Risk Institute. Retrieved 28 November 2024. 72 AGI R&D projects were identified as being active in 2020.
  5. ^ "AI timelines: What do experts in artificial intelligence expect for the future?". Our World in Data. Retrieved 6 April 2023.
  6. ^ Metz, Cade (15 May 2023). "Some Researchers Say A.I. Is Already Here, Stirring Debate in Tech Circles". The New York Times. Retrieved 18 May 2023.
  7. ^ "AI pioneer Geoffrey Hinton quits Google and warns of danger ahead". The New York Times. 1 May 2023. Retrieved 2 May 2023. It is hard to see how you can prevent the bad actors from using it for bad things.
  8. ^ Bubeck, Sébastien; Chandrasekaran, Varun; Eldan, Ronen; Gehrke, Johannes; Horvitz, Eric (2023). "Sparks of Artificial General Intelligence: Early experiments with GPT-4". arXiv:2303.12712. GPT-4 shows sparks of AGI.
  9. ^ Butler, Octavia E. (1993). Parable of the Sower. Grand Central Publishing. ISBN 978-0-4466-7550-5. All that you touch you change. All that you change changes you.
  10. ^ Vinge, Vernor (1992). A Fire Upon the Deep. Tor Books. ISBN 978-0-8125-1528-2. The Singularity is coming.
  11. ^ Morozov, Evgeny (30 June 2023). "The True Threat of Artificial Intelligence". The New York Times. The real threat is not AI itself but the way we deploy it.
  12. ^ "Impressed by artificial intelligence? Experts say AGI is coming next, and it has 'existential' risks". ABC News. 23 March 2023. Retrieved 6 April 2023. AGI could pose existential risks to humanity.
  13. ^ Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press. ISBN 978-0-1996-7811-2. The first superintelligence will be the last invention that humanity needs to make.
  14. ^ Roose, Kevin (30 May 2023). "A.I. Poses 'Risk of Extinction,' Industry Leaders Warn". The New York Times. Mitigating the risk of extinction from AI should be a global priority.
  15. ^ "Statement on AI Risk". Center for AI Safety. Retrieved 1 March 2024. AI experts warn of risk of extinction from AI.
  16. ^ Mitchell, Melanie (30 May 2023). "Are AI's Doomsday Scenarios Worth Taking Seriously?". The New York Times. We are far from creating machines that can outthink us in general ways.
  17. ^ LeCun, Yann (June 2023). "AGI does not present an existential risk". Medium. There is no reason to fear AI as an existential threat.