Understanding how the human brain works is a persistent enigma. There are many dead ends in the maze that is cognitive science. In this post, I address one: the use of the word "learning" in AI, as in machine learning or unsupervised learning.
Potter's clay is a very plastic and expressive medium. If I were to mold wet clay into a bust of Alan Turing, would you say that the clay learned to resemble Alan Turing? I don't think so. You would say that I sculpted it. Not only did I give it shape, but I gave it meaning as well: it represents a famous computer scientist and cryptologist. The clay played the most passive role of "remembering": it simply remained in whatever shape I gave it.
If I wish to create a hotdog recognition app [1], I gather pictures of hotdogs and of non-hotdog items. I then feed images from these two categories into a neural network. The back-propagation algorithm adjusts node weights until the network accurately classifies images as hot dog or not hot dog. UC Berkeley professor Alison Gopnik says, “We call it ‘artificial intelligence’, but a better name might be ‘extracting statistical patterns from large data sets’”.
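To make the "statistical patterns" point concrete, here is a minimal sketch of such a classifier in PyTorch. The directory layout, image size, network architecture, and training settings are my own illustrative assumptions, not anything from the show or from a real app:

```python
# A minimal sketch of a "hotdog / not hotdog" classifier.
# The folder names below (data/hotdog, data/not_hotdog) are hypothetical.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Resize every image and convert it to a tensor before training.
transform = transforms.Compose([
    transforms.Resize((64, 64)),
    transforms.ToTensor(),
])

# ImageFolder expects one sub-directory per class, e.g.
#   data/hotdog/*.jpg   and   data/not_hotdog/*.jpg
train_data = datasets.ImageFolder("data", transform=transform)
loader = DataLoader(train_data, batch_size=32, shuffle=True)

# A small feed-forward network: these are the "nodes" whose
# weights back-propagation adjusts.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(64 * 64 * 3, 128),
    nn.ReLU(),
    nn.Linear(128, 2),   # two outputs: hot dog / not hot dog
)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()    # back-propagation computes the gradients...
        optimizer.step()   # ...and the weights are nudged accordingly
```

Notice that every piece of structure in that sketch (the labels, the architecture, the loss function, the number of passes over the data) is supplied by the engineer; the network merely absorbs the statistical regularities it is fed.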
Similar to a sculptor shaping clay and giving it meaning, knowledge engineers program neural networks and give them meaning. This may seem like a distinction without a difference to anyone who has never raised an infant, but young parents know otherwise.
When my son Charlie was born, he—like all human infants—was unaware that his own arms and legs were part of him or that he had control over them. Yet, in a few short years, this helpless infant grew into a running, throwing, strategizing Little League baseball player. I cannot take credit for that transformation. Heaven knows, I did not embed or program knowledge of baseball into his brain. He learned to play baseball by observing, listening, practicing, and leveraging billions of years of evolution. Any parent immediately recognizes the voracious appetite for learning in children [2].
A neural network of this kind is an example of so-called "supervised learning", and in popular usage AI has become synonymous with "machine learning". But no learning is taking place. We humans love to anthropomorphize things. It makes AI appear more familiar without actually explaining anything. This metaphorical language also hides how AI and animal cognition differ. Despite their superficial similarities, AI and cognition rely on completely distinct internal mechanisms, like electric motors and internal combustion engines. The more we anthropomorphize AI, the less likely we are to recognize the vast gulf that separates AI from natural intelligence.
[1] One of the funniest episodes of HBO's Silicon Valley (Season 4 Episode 4) is when Jian-Yang demos a hotdog recognition app.
[2] The idea that "a child is a container that teachers pour knowledge into" explains why so many children hate school. The best teachers and parents do not treat their students as containers but as consumers of knowledge. Good teachers, like good cooks, stimulate an appetite in children to gratefully consume knowledge.
....but Tom, your definition of learning is not that different from deep A.I. learning. Take the hot dog example and apply it to humans: we recognize a hot dog because we have seen images of multiple hot dogs and NOT hot dogs, and "learned" what is a hot dog and what isn't. That seems pretty similar to generative A.I. If you show me a rubber hot dog and ask me what it is, I'll still say it is a hot dog: so will A.I., as long as all we have programmed into the computer are images. We are only using the sight sense in this example, but you could extrapolate the same argument to the other senses. Program in enough data about what a hot dog tastes like, feels like, and smells like, and I think the computer will be able to differentiate, based on sight, texture, smell, touch, and taste, what is a hot dog and what isn't (and even differentiate it from a Polish sausage). All it needs is the programming. I'd argue that is all the human mind needs also. I'm not sure human learning is all that different from programming. A.I. also learns from its mistakes; that's how it is able to win at chess or navigate a maze. Lastly, what percentage of the population is good at the things A.I. does very well, i.e., computation, pattern recognition, translation, language, writing, etc.? Answer: not many. My point: A.I. doesn't have to be very good to best most people in the places where it will be used, because most people are pretty poor at these things. McDonald's knew what it was doing when it replaced the cash register with pictures and push buttons so employees wouldn't need to calculate change. Bottom line: programming is how a computer learns, that isn't too different from how humans learn, and A.I. is starting to learn certain subjects better than humans can.
Wait a minute: the Hot Dog, Not Hot Dog clip is indeed funny and demonstrates the "non-learning" difference between machines and humans, especially in the early days of A.I. But this is 2023, not 1995, and because computers are now a trillion times more powerful than they were in 1995, a generative A.I. system has assimilated millions of pictures of hot dogs and NOT hot dogs and can now identify a hot dog better than a human can. We see this daily in A.I. reading radiology scans and picking up cancers better than board-certified radiologists, recognizing a face better than a human can, learning chess in 4 hours well enough to beat the best human chess player, and doing the same with the game Go, etc. I could go on and on, but you get the point. What is learning anyhow? Is it not the ability to differentiate, provide an answer, recognize a pattern, and answer a question? In so many areas, generative A.I. performs better than humans do. I agree it isn't perfect yet. A.I. applications that are now being commercialized are only 5 years old (despite the fact that programmers have been working in this area for 40 years). Frankly, if A.I. can learn chess well enough to beat a chess grandmaster in only 4 hours, I don't care how you categorize "learning"; it learns much better than I do.