Opšta veštačka inteligencija
Opšta veštačka inteligencija (artificial general intelligence, AGI) is a type of artificial intelligence (AI) that can perform as well as or better than humans on a wide range of cognitive tasks,[1] as opposed to narrow AI, which is designed for specific tasks.[2] It is one of various definitions of strong AI.
Creating AGI is a primary goal of AI research and of companies such as OpenAI,[3] DeepMind, and Anthropic. A 2020 survey identified 72 active AGI R&D projects spread across 37 countries.[4]
The timeline for AGI development remains a subject of ongoing debate among researchers and experts. As of 2023, some argue that it may be possible in years or decades; others maintain it might take a century or longer; and a minority believe it may never be achieved.[5] There is debate on the exact definition of AGI and on whether modern large language models (LLMs) such as GPT-4 are early, incomplete forms of AGI.[6] AGI is a common topic in science fiction and futures studies.
Contention exists over the potential for AGI to pose a threat to humanity;[7] for example, OpenAI claims to treat it as an existential risk, while others find the development of AGI to be too remote to present a risk.[8][5][9]
Terminology
AGI is also known as strong AI,[10][11] full AI,[12] human-level AI[5] or general intelligent action.[13] However, some academic sources reserve the term "strong AI" for computer programs that experience sentience or consciousness.[а] In contrast, weak AI (or narrow AI) is able to solve one specific problem, but lacks general cognitive abilities.[14][11] Some academic sources use "weak AI" to refer more broadly to any programs that neither experience consciousness nor have a mind in the same sense as humans.[а]
Related concepts include artificial superintelligence and transformative AI. An artificial superintelligence (ASI) is a hypothetical type of AGI that is much more generally intelligent than humans,[15] while the notion of transformative AI relates to AI having a large impact on society, for example, similar to the agricultural or industrial revolution.[16]
Notes
- ^ a b See below for the origin of the term "strong AI", and see the academic definition of "strong AI" and weak AI in the article Chinese room.
References
- ^ Heaven, Will Douglas (16 November 2023). "Google DeepMind wants to define what counts as artificial general intelligence". MIT Technology Review. Retrieved 1 March 2024.
- ^ Krishna, Sri (9 February 2023). "What is artificial narrow intelligence (ANI)?". VentureBeat. Retrieved 1 March 2024.
- ^ "OpenAI Charter". openai.com. Retrieved 6 April 2023.
- ^ Baum, Seth, A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy (PDF), Global Catastrophic Risk Institute Working Paper 20, Archived (PDF) from the original on 14 November 2021, Retrieved 13 January 2022
- ^ a b c "AI timelines: What do experts in artificial intelligence expect for the future?". Our World in Data. Retrieved 6 April 2023.
- ^ "Microsoft Researchers Claim GPT-4 Is Showing "Sparks" of AGI". Futurism. Retrieved 13 December 2023.
- ^ Morozov, Evgeny (30 June 2023). "The True Threat of Artificial Intelligence". The New York Times. Archived from the original on 30 June 2023. Retrieved 30 June 2023.
- ^ "Impressed by artificial intelligence? Experts say AGI is coming next, and it has 'existential' risks". ABC News. 23 March 2023. Retrieved 6 April 2023.
- ^ "Artificial general intelligence: Are we close, and does it even make sense to try?". MIT Technology Review. Retrieved 6 April 2023.
- ^ Kurzweil 2005, p. 260.
- ^ a b Kurzweil, Ray (5 August 2005a), "Long Live AI", Forbes, Archived from the original on 14 August 2005: Kurzweil describes strong AI as "machine intelligence with the full range of human intelligence."
- ^ "The Age of Artificial Intelligence: George John at TEDxLondonBusinessSchool 2013". Archived from the original on 26 February 2014. Retrieved 22 February 2014.
- ^ Newell & Simon 1976. This is the term they use for "human-level" intelligence in the physical symbol system hypothesis.
- ^ "The Open University on Strong and Weak AI". Archived from the original on 25 September 2009. Retrieved 8 October 2007.
- ^ "What is artificial superintelligence (ASI)? | Definition from TechTarget". Enterprise AI. Retrieved 8 October 2023.
- ^ "Artificial intelligence is transforming our world – it is on all of us to make sure that it goes well". Our World in Data. Retrieved 8 October 2023.
Bibliography
- UNESCO Science Report: the Race Against Time for Smarter Development (PDF). Paris: UNESCO. 11 June 2021. ISBN 978-92-3-100450-6. Archived (PDF) from the original on 18 June 2022. Retrieved 22 September 2021.
- Aleksander, Igor (1996), Impossible Minds, World Scientific Publishing Company, ISBN 978-1-86094-036-1
- Azevedo FA, Carvalho LR, Grinberg LT, Farfel J, et al. (April 2009), "Equal numbers of neuronal and nonneuronal cells make the human brain an isometrically scaled-up primate brain", The Journal of Comparative Neurology, 513 (5): 532–541, PMID 19226510, S2CID 5200449, doi:10.1002/cne.21974, Archived from the original on 18 February 2021, Retrieved 4 September 2013
- Berglas, Anthony (2008), Artificial Intelligence will Kill our Grandchildren, Archived from the original on 23 July 2014, Retrieved 31 August 2012
- Chalmers, David (1996), The Conscious Mind, Oxford University Press.
- Clocksin, William (August 2003), "Artificial intelligence and the future", Philosophical Transactions of the Royal Society A, 361 (1809): 1721–1748, Bibcode:2003RSPTA.361.1721C, PMID 12952683, S2CID 31032007, doi:10.1098/rsta.2003.1232.
- Crevier, Daniel (1993). AI: The Tumultuous Search for Artificial Intelligence. New York, NY: BasicBooks. ISBN 0-465-02997-3.
- Darrach, Brad (20 November 1970), "Meet Shakey, the First Electronic Person", Life Magazine, pp. 58–68.
- Drachman, D. (2005), "Do we have brain to spare?", Neurology, 64 (12): 2004–2005, PMID 15985565, S2CID 38482114, doi:10.1212/01.WNL.0000166914.38327.BB.
- Feigenbaum, Edward A.; McCorduck, Pamela (1983), The Fifth Generation: Artificial Intelligence and Japan's Computer Challenge to the World, Michael Joseph, ISBN 978-0-7181-2401-4
- Gelernter, David (2010), Dream-logic, the Internet and Artificial Thought, Archived from the original on 26 July 2010, Retrieved 25 July 2010
- Goertzel, Ben; Pennachin, Cassio, eds. (2006), Artificial General Intelligence (PDF), Springer, ISBN 978-3-540-23733-4, Archived from the original (PDF) on 20 March 2013
- Goertzel, Ben (December 2007), "Human-level artificial general intelligence and the possibility of a technological singularity: a reaction to Ray Kurzweil's The Singularity Is Near, and McDermott's critique of Kurzweil", Artificial Intelligence, 171 (18, Special Review Issue): 1161–1173, doi:10.1016/j.artint.2007.10.011, Archived from the original on 7 January 2016, Retrieved 1 April 2009.
- Gubrud, Mark (November 1997), "Nanotechnology and International Security", Fifth Foresight Conference on Molecular Nanotechnology, Archived from the original on 29 May 2011, Retrieved 7 May 2011
- Halal, William E. "TechCast Article Series: The Automation of Thought" (PDF). Archived from the original (PDF) on 6 June 2013.
- Holte, R. C.; Choueiry, B. Y. (2003), „Abstraction and reformulation in artificial intelligence”, Philosophical Transactions of the Royal Society B, 358 (1435): 1197—1204, PMC 1693218 , PMID 12903653, doi:10.1098/rstb.2003.1317.
- Howe, J. (November 1994), Artificial Intelligence at Edinburgh University: a Perspective, Archived from the original on 17 August 2007, Retrieved 30 August 2007
- Johnson, Mark (1987), The body in the mind, Chicago, ISBN 978-0-226-40317-5
- Kurzweil, Ray (2005), The Singularity is Near, Viking Press
- Lighthill, Professor Sir James (1973), „Artificial Intelligence: A General Survey”, Artificial Intelligence: a paper symposium, Science Research Council
- Luger, George; Stubblefield, William (2004), Artificial Intelligence: Structures and Strategies for Complex Problem Solving (5th ed.), The Benjamin/Cummings Publishing Company, Inc., p. 720, ISBN 978-0-8053-4780-7
- McCarthy, John (October 2007), "From here to human-level AI", Artificial Intelligence, 171 (18): 1174–1182, doi:10.1016/j.artint.2007.10.009.
- McCorduck, Pamela (2004), Machines Who Think (2nd ed.), Natick, MA: A. K. Peters, Ltd., ISBN 1-56881-205-1
- Moravec, Hans (1976), The Role of Raw Power in Intelligence, Archived from the original on 3 March 2016, Retrieved 29 September 2007
- Moravec, Hans (1988), Mind Children, Harvard University Press
- Moravec, Hans (1998), "When will computer hardware match the human brain?", Journal of Evolution and Technology, 1, Archived from the original on 15 June 2006, Retrieved 23 June 2006
- Nagel, Thomas (1974), "What Is it Like to Be a Bat" (PDF), Philosophical Review, 83 (4): 435–450, JSTOR 2183914, doi:10.2307/2183914, Archived (PDF) from the original on 16 October 2011, Retrieved 7 November 2009.
- Newell, Allen; Simon, H. A. (1963), "GPS: A Program that Simulates Human Thought", in Feigenbaum, E. A.; Feldman, J. (eds.), Computers and Thought, New York: McGraw-Hill
- Newell, Allen; Simon, H. A. (1976). „Computer Science as Empirical Inquiry: Symbols and Search”. Communications of the ACM. 19 (3): 113—126. doi:10.1145/360018.360022 .
- Nilsson, Nils (1998), Artificial Intelligence: A New Synthesis, Morgan Kaufmann Publishers, ISBN 978-1-55860-467-4
- NRC (1999), "Developments in Artificial Intelligence", Funding a Revolution: Government Support for Computing Research, National Academy Press, Archived from the original on 12 January 2008, Retrieved 29 September 2007
- Omohundro, Steve (2008), The Nature of Self-Improving Artificial Intelligence, presented and distributed at the 2007 Singularity Summit, San Francisco, California
- Poole, David; Mackworth, Alan; Goebel, Randy (1998), Computational Intelligence: A Logical Approach, New York: Oxford University Press, Archived from the original on 25 July 2009, Retrieved 6 December 2007
- Russell, Stuart J.; Norvig, Peter (2003), Artificial Intelligence: A Modern Approach (2nd изд.), Upper Saddle River, New Jersey: Prentice Hall, ISBN 0-13-790395-2
- Sandberg, Anders; Boström, Nick (2008), Whole Brain Emulation: A Roadmap (PDF), Technical Report #2008-3, Future of Humanity Institute, Oxford University, Archived (PDF) from the original on 25 March 2020, Retrieved 5 April 2009
- Searle, John (1980), "Minds, Brains and Programs" (PDF), Behavioral and Brain Sciences, 3 (3): 417–457, S2CID 55303721, doi:10.1017/S0140525X00005756, Archived (PDF) from the original on 17 March 2019, Retrieved 3 September 2020
- Simon, H. A. (1965), The Shape of Automation for Men and Management, New York: Harper & Row
- Sutherland, J. G. (1990), „Holographic Model of Memory, Learning, and Expression”, International Journal of Neural Systems, 1—3: 256—267.
- Turing, Alan (October 1950), "Computing Machinery and Intelligence", Mind, LIX (236): 433–460, ISSN 0026-4423, doi:10.1093/mind/LIX.236.433
- de Vega, Manuel; Glenberg, Arthur; Graesser, Arthur, eds. (2008), Symbols and Embodiment: Debates on meaning and cognition, Oxford University Press, ISBN 978-0-19-921727-4.
- Wang, Pei; Goertzel, Ben (2007). "Introduction: Aspects of Artificial General Intelligence". Advances in Artificial General Intelligence: Concepts, Architectures and Algorithms: Proceedings of the AGI Workshop 2006. IOS Press. pp. 1–16. ISBN 978-1-58603-758-1. Archived from the original on 18 February 2021. Retrieved 13 December 2020.
- Williams, R. W.; Herrup, K. (1988), „The control of neuron number”, Annual Review of Neuroscience, 11: 423—453, PMID 3284447, doi:10.1146/annurev.ne.11.030188.002231.
- Yudkowsky, Eliezer (2006), "Artificial General Intelligence" (PDF), Annual Review of Psychology, Springer, 49: 585–612, ISBN 978-3-540-23733-4, PMID 9496632, doi:10.1146/annurev.psych.49.1.585, Archived from the original (PDF) on 11 April 2009.
- Yudkowsky, Eliezer (2008), „Artificial Intelligence as a Positive and Negative Factor in Global Risk”, Global Catastrophic Risks, Bibcode:2008gcr..book..303Y, ISBN 9780198570509, doi:10.1093/oso/9780198570509.003.0021.
- Zucker, Jean-Daniel (July 2003), "A grounded theory of abstraction in artificial intelligence", Philosophical Transactions of the Royal Society B, 358 (1435): 1293–1309, PMC 1693211, PMID 12903672, doi:10.1098/rstb.2003.1308.
- Cukier, Kenneth, "Ready for Robots? How to Think about the Future of AI", Foreign Affairs, vol. 98, no. 4 (July/August 2019), pp. 192–98. George Dyson, historian of computing, writes (in what might be called "Dyson's Law") that "Any system simple enough to be understandable will not be complicated enough to behave intelligently, while any system complicated enough to behave intelligently will be too complicated to understand." (p. 197.) Computer scientist Alex Pentland writes: "Current AI machine-learning algorithms are, at their core, dead simple stupid. They work, but they work by brute force." (p. 198.)
- Domingos, Pedro, "Our Digital Doubles: AI will serve our species, not control it", Scientific American, vol. 319, no. 3 (September 2018), pp. 88–93. "AIs are like autistic savants and will remain so for the foreseeable future.... AIs lack common sense and can easily make errors that a human never would... They are also liable to take our instructions too literally, giving us precisely what we asked for instead of what we actually wanted." (p. 93.)
- Gleick, James, "The Fate of Free Will" (review of Kevin J. Mitchell, Free Agents: How Evolution Gave Us Free Will, Princeton University Press, 2023, 333 pp.), The New York Review of Books, vol. LXXI, no. 1 (18 January 2024), pp. 27–28, 30. "Agency is what distinguishes us from machines. For biological creatures, reason and purpose come from acting in the world and experiencing the consequences. Artificial intelligences – disembodied, strangers to blood, sweat, and tears – have no occasion for that." (p. 30.)
- Hanna, Alex, and Emily M. Bender, "Theoretical AI Harms Are a Distraction: Fearmongering about artificial intelligence's potential to end humanity shrouds the real harm it already causes", Scientific American, vol. 330, no. 2 (February 2024), pp. 69–70. "[H]ype [about "existential risks"] surrounds many AI firms, but their technology already enables myriad harms, including... discrimination in housing, criminal justice, and health care, as well as the spread of hate speech and misinformation... Large language models extrude... fluent... coherent-seeming text but have no understanding of what the text means, let alone the ability to reason.... (p. 69.) [T]hat output... becomes a noxious... insidious pollutant of our information ecosystem.... [T]oo many... publications [about] AI come from corporate labs or... academic groups that receive... industry funding. Many of these publications are based on junk science [that] is nonreproducible... is full of hype, and uses evaluation methods that do not measure what they purport to... Meanwhile 'AI doomers' cite this junk science... to [misdirect] attention [to] the fantasy of all-powerful machines possibly going rogue and destroying humanity." (p. 70.)
- Hughes-Castleberry, Kenna, "A Murder Mystery Puzzle: The literary puzzle Cain's Jawbone, which has stumped humans for decades, reveals the limitations of natural-language-processing algorithms", Scientific American, vol. 329, no. 4 (November 2023), pp. 81–82. "This murder mystery competition has revealed that although NLP (natural-language processing) models are capable of incredible feats, their abilities are very much limited by the amount of context they receive. This [...] could cause [difficulties] for researchers who hope to use them to do things such as analyze ancient languages. In some cases, there are few historical records on long-gone civilizations to serve as training data for such a purpose." (p. 82.)
- Immerwahr, Daniel, "Your Lying Eyes: People now use A.I. to generate fake videos indistinguishable from real ones. How much does it matter?", The New Yorker, 20 November 2023, pp. 54–59. "If by 'deepfakes' we mean realistic videos produced using artificial intelligence that actually deceive people, then they barely exist. The fakes aren't deep, and the deeps aren't fake. [...] A.I.-generated videos are not, in general, operating in our media as counterfeited evidence. Their role better resembles that of cartoons, especially smutty ones." (p. 59.)
- Marcus, Gary, "Am I Human?: Researchers need new ways to distinguish artificial intelligence from the natural kind", Scientific American, vol. 316, no. 3 (March 2017), pp. 61–63. Marcus points out a so far insuperable stumbling block to artificial intelligence: an incapacity for reliable disambiguation. "[V]irtually every sentence [that people generate] is ambiguous, often in multiple ways. Our brain is so good at comprehending language that we do not usually notice." A prominent example is the "pronoun disambiguation problem" ("PDP"): a machine has no way of determining to whom or what a pronoun in a sentence—such as "he", "she" or "it"—refers.
- Marcus, Gary, "Artificial Confidence: Even the newest, buzziest systems of artificial general intelligence are stymmied by the same old problems", Scientific American, vol. 327, no. 4 (October 2022), pp. 42–45.
- Press, Eyal, "In Front of Their Faces: Does facial-recognition technology lead police to ignore contradictory evidence?", The New Yorker, 20 November 2023, pp. 20–26.
- Roivainen, Eka, "AI's IQ: ChatGPT aced a [standard intelligence] test but showed that intelligence cannot be measured by IQ alone", Scientific American, vol. 329, no. 1 (July/August 2023), p. 7. "Despite its high IQ, ChatGPT fails at tasks that require real humanlike reasoning or an understanding of the physical and social world.... ChatGPT seemed unable to reason logically and tried to rely on its vast database of... facts derived from online texts."