Symbolic artificial intelligence

In artificial intelligence, symbolic artificial intelligence (also known as classical artificial intelligence or logic-based artificial intelligence)[1][2] is the term for the collection of all methods in artificial intelligence research that are based on high-level symbolic (human-readable) representations of problems, logic and search.[3] Symbolic AI used tools such as logic programming, production rules, semantic nets and frames, and it developed applications such as knowledge-based systems (in particular, expert systems), symbolic mathematics, automated theorem provers, ontologies, the semantic web, and automated planning and scheduling systems. The Symbolic AI paradigm led to seminal ideas in search, symbolic programming languages, agents, multi-agent systems, the semantic web, and the strengths and limitations of formal knowledge and reasoning systems.
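
For example, a production-rule system expresses knowledge as human-readable if–then rules and derives new facts by forward chaining over them. The following Python sketch is a minimal illustration of this idea only; the predicates, rules, and facts are invented for the example and are not drawn from any system cited here.

    # Minimal sketch of forward chaining over production rules (illustrative only).
    # Facts and rules are written as human-readable symbols.
    facts = {"has_feathers(tweety)", "lays_eggs(tweety)"}

    # Each rule pairs a set of conditions with a conclusion.
    rules = [
        ({"has_feathers(tweety)", "lays_eggs(tweety)"}, "bird(tweety)"),
        ({"bird(tweety)"}, "can_fly(tweety)"),
    ]

    # Forward chaining: fire any rule whose conditions are all known,
    # adding its conclusion to the fact base, until nothing new is derived.
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(facts)  # now also contains bird(tweety) and can_fly(tweety)

Running the sketch first derives bird(tweety) and then can_fly(tweety), the same rule-chaining pattern that expert-system shells applied to much larger rule sets.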

Symbolic AI was the dominant paradigm of AI research from the mid-1950s until the mid-1990s.[4] Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with artificial general intelligence and considered this the ultimate goal of their field.[citation needed] An early boom, with successes such as the Logic Theorist and Samuel's Checkers Playing Program, led to unrealistic expectations and promises and was followed by the first AI Winter as funding dried up.[5][6] A second boom (1969–1986) occurred with the rise of expert systems, their promise of capturing corporate expertise, and an enthusiastic corporate embrace.[7][8] That boom, and early successes such as XCON at DEC, were again followed by disappointment.[8] Difficulties arose with knowledge acquisition, maintaining large knowledge bases, and brittleness in handling out-of-domain problems. A second AI Winter (1988–2011) followed.[9] Subsequently, AI researchers focused on addressing the underlying problems of handling uncertainty and of knowledge acquisition.[10] Uncertainty was addressed with formal methods such as hidden Markov models, Bayesian reasoning, and statistical relational learning.[11][12] Symbolic machine learning addressed the knowledge acquisition problem with contributions including Version Space, Valiant's PAC learning, Quinlan's ID3 decision-tree learning, case-based learning, and inductive logic programming to learn relations.[13]

Neural networks, a subsymbolic approach, had been pursued from early days and reemerged strongly in 2012. Early examples are Rosenblatt's perceptron learning work, the backpropagation work of Rumelhart, Hinton and Williams,[14] and work in convolutional neural networks by LeCun et al. in 1989.[15] However, neural networks were not viewed as successful until about 2012: "Until Big Data became commonplace, the general consensus in the AI community was that the so-called neural-network approach was hopeless. Systems just didn't work that well, compared to other methods. ... A revolution came in 2012, when a number of people, including a team of researchers working with Hinton, worked out a way to use the power of GPUs to enormously increase the power of neural networks."[16] Over the next several years, deep learning had spectacular success in handling vision, speech recognition, speech synthesis, image generation, and machine translation. However, since 2020, as inherent difficulties with bias, explanation, comprehensibility, and robustness have become more apparent in deep learning approaches, an increasing number of AI researchers have called for combining the best of both the symbolic and neural network approaches[17][18] and addressing areas that both approaches have difficulty with, such as common-sense reasoning.[16]

  1. ^ Garnelo, Marta; Shanahan, Murray (October 2019). "Reconciling deep learning with symbolic artificial intelligence: representing objects and relations". Current Opinion in Behavioral Sciences. 29: 17–23. doi:10.1016/j.cobeha.2018.12.010.
  2. ^ Thomason, Richmond (February 27, 2024). "Logic-Based Artificial Intelligence". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy.
  3. ^ Garnelo, Marta; Shanahan, Murray (October 2019). "Reconciling deep learning with symbolic artificial intelligence: representing objects and relations". Current Opinion in Behavioral Sciences. 29: 17–23. doi:10.1016/j.cobeha.2018.12.010. hdl:10044/1/67796. S2CID 72336067.
  4. ^ Kolata 1982.
  5. ^ Kautz 2022, pp. 107–109.
  6. ^ Russell & Norvig 2021, p. 19.
  7. ^ Russell & Norvig 2021, pp. 22–23.
  8. ^ a b Kautz 2022, pp. 109–110.
  9. ^ Kautz 2022, p. 110.
  10. ^ Kautz 2022, pp. 110–111.
  11. ^ Russell & Norvig 2021, p. 25.
  12. ^ Kautz 2022, p. 111.
  13. ^ Kautz 2022, pp. 110–111.
  14. ^ Rumelhart, David E.; Hinton, Geoffrey E.; Williams, Ronald J. (1986). "Learning representations by back-propagating errors". Nature. 323 (6088): 533–536. Bibcode:1986Natur.323..533R. doi:10.1038/323533a0. ISSN 1476-4687. S2CID 205001834.
  15. ^ LeCun, Y.; Boser, B.; Denker, J.; Henderson, D.; Howard, R.; Hubbard, W.; Jackel, L. (1989). "Backpropagation Applied to Handwritten Zip Code Recognition". Neural Computation. 1 (4): 541–551. doi:10.1162/neco.1989.1.4.541. S2CID 41312633.
  16. ^ a b Marcus & Davis 2019.
  17. ^ Rossi, Francesca. "Thinking Fast and Slow in AI". AAAI. Retrieved 5 July 2022.
  18. ^ Selman, Bart. "AAAI Presidential Address: The State of AI". AAAI. Retrieved 5 July 2022.
