Symbolic artificial intelligence

[Image: An artistic representation of AI, blending a cross-section of a human head and brain in profile with a circuit-like background]

In artificial intelligence, symbolic artificial intelligence is the term for the collection of all methods in artificial intelligence research that are based on high-level symbolic (human-readable) representations of problems, logic and search.[1] Symbolic AI used tools such as logic programming, production rules, semantic nets and frames, and it developed applications such as knowledge-based systems (in particular, expert systems), symbolic mathematics, automated theorem provers, ontologies, the semantic web, and automated planning and scheduling systems. The Symbolic AI paradigm led to seminal ideas in search, symbolic programming languages, agents, multi-agent systems, the semantic web, and the strengths and limitations of formal knowledge and reasoning systems.
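
To make "high-level symbolic (human-readable) representations" concrete, the following minimal sketch implements naive forward chaining over production rules in Python. The facts and rules are illustrative placeholders, not drawn from any particular system:

```python
# A minimal sketch of forward chaining over production rules.
# Facts and rules are hypothetical examples for illustration only.

facts = {"socrates is human"}

# Each production rule maps a set of premises to a conclusion.
rules = [
    ({"socrates is human"}, "socrates is mortal"),
    ({"socrates is mortal"}, "socrates will die"),
]

def forward_chain(facts, rules):
    """Repeatedly fire rules whose premises all hold until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(forward_chain(facts, rules)))
# ['socrates is human', 'socrates is mortal', 'socrates will die']
```

Note that both the knowledge (the rules) and the derived conclusions remain human-readable symbols throughout, which is the defining trait of the symbolic approach; production-rule engines in real expert systems add conflict resolution and efficient matching on top of this basic loop.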

Symbolic AI was the dominant paradigm of AI research from the mid-1950s until the mid-1990s.[2] Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with artificial general intelligence and considered this the ultimate goal of their field.[citation needed] An early boom, with successes such as the Logic Theorist and Samuel's checkers-playing program, led to unrealistic expectations and promises and was followed by the first AI winter as funding dried up.[3][4] A second boom (1969–1986) occurred with the rise of expert systems, their promise of capturing corporate expertise, and an enthusiastic corporate embrace.[5][6] That boom, despite early successes such as XCON at DEC, was again followed by disappointment.[6] Difficulties arose with knowledge acquisition, with maintaining large knowledge bases, and with brittleness on out-of-domain problems, and a second AI winter (1988–2011) followed.[7] Subsequently, AI researchers focused on addressing the underlying problems of handling uncertainty and of knowledge acquisition.[8] Uncertainty was addressed with formal methods such as hidden Markov models, Bayesian reasoning, and statistical relational learning.[9][10] Symbolic machine learning addressed the knowledge-acquisition problem with contributions including version space learning, Valiant's PAC learning, Quinlan's ID3 decision-tree learning (illustrated below), case-based learning, and inductive logic programming for learning relations.[11]
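
As an illustration of one of these symbolic learning methods: ID3 greedily selects, at each node of a decision tree, the attribute whose split yields the highest information gain. A minimal sketch of that selection criterion follows, using a toy weather-style dataset (illustrative, not Quinlan's original data or code):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a sequence of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attribute):
    """Entropy reduction from splitting the examples on one attribute."""
    gain = entropy(labels)
    n = len(rows)
    for value in {row[attribute] for row in rows}:
        subset = [lab for row, lab in zip(rows, labels) if row[attribute] == value]
        gain -= (len(subset) / n) * entropy(subset)
    return gain

# Toy data: four examples with two attributes each (hypothetical).
rows = [
    {"outlook": "sunny", "windy": False},
    {"outlook": "sunny", "windy": True},
    {"outlook": "rain",  "windy": False},
    {"outlook": "rain",  "windy": True},
]
labels = ["no", "no", "yes", "yes"]

# ID3 picks the attribute with the highest information gain as the split.
best = max(rows[0], key=lambda a: information_gain(rows, labels, a))
print(best)  # -> 'outlook' (here it separates the classes perfectly)
```

The learned tree is itself a human-readable symbolic structure, which is why decision-tree induction sits on the symbolic side of machine learning.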

Neural networks, a subsymbolic approach, had been pursued from the early days of the field and reemerged strongly in 2012. Early examples are Rosenblatt's perceptron learning work, the backpropagation work of Rumelhart, Hinton, and Williams,[12] and work on convolutional neural networks by LeCun et al. in 1989.[13] However, neural networks were not viewed as successful until about 2012: "Until Big Data became commonplace, the general consensus in the AI community was that the so-called neural-network approach was hopeless. Systems just didn't work that well, compared to other methods. ... A revolution came in 2012, when a number of people, including a team of researchers working with Hinton, worked out a way to use the power of GPUs to enormously increase the power of neural networks."[14] Over the next several years, deep learning had spectacular success in handling vision, speech recognition, speech synthesis, image generation, and machine translation. However, since 2020, as inherent difficulties with bias, explanation, comprehensibility, and robustness became more apparent in deep learning approaches, an increasing number of AI researchers have called for combining the best of both the symbolic and neural-network approaches[15][16] and addressing areas that both approaches have difficulty with, such as common-sense reasoning.[14]

  1. ^ Garnelo, Marta; Shanahan, Murray (2019-10-01). "Reconciling deep learning with symbolic artificial intelligence: representing objects and relations". Current Opinion in Behavioral Sciences. 29: 17–23. doi:10.1016/j.cobeha.2018.12.010. hdl:10044/1/67796. S2CID 72336067.
  2. ^ Kolata 1982.
  3. ^ Kautz 2022, pp. 107–109.
  4. ^ Russell & Norvig 2021, p. 19.
  5. ^ Russell & Norvig 2021, pp. 22–23.
  6. ^ Kautz 2022, pp. 109–110.
  7. ^ Kautz 2022, p. 110.
  8. ^ Kautz 2022, pp. 110–111.
  9. ^ Russell & Norvig 2021, p. 25.
  10. ^ Kautz 2022, p. 111.
  11. ^ Kautz 2022, pp. 110–111.
  12. ^ Rumelhart, David E.; Hinton, Geoffrey E.; Williams, Ronald J. (1986). "Learning representations by back-propagating errors". Nature. 323 (6088): 533–536. Bibcode:1986Natur.323..533R. doi:10.1038/323533a0. ISSN 1476-4687. S2CID 205001834.
  13. ^ LeCun, Y.; Boser, B.; Denker, J. S.; Henderson, D.; Howard, R. E.; Hubbard, W.; Jackel, L. D. (1989). "Backpropagation Applied to Handwritten Zip Code Recognition". Neural Computation. 1 (4): 541–551. doi:10.1162/neco.1989.1.4.541. S2CID 41312633.
  14. ^ Marcus & Davis 2019.
  15. ^ Rossi, Francesca. "Thinking Fast and Slow in AI". AAAI. Retrieved 5 July 2022.
  16. ^ Selman, Bart. "AAAI Presidential Address: The State of AI". AAAI. Retrieved 5 July 2022.
