Ever since these models appeared, researchers and philosophers have debated whether they are complex ‘stochastic parrots’ or whether they are indeed capable of understanding and will eventually be smarter than us.
It’s an important question: if foundation models – the catch-all term for these systems – can truly understand in the way humans do, it might be a step towards superintelligence, and even a form of artificial sentience.
The debate has been muddy because people often talk past each other due to differing definitions of ‘meaning’ and ‘understanding.’ But researchers at Amazon Web Services (AWS) have proposed using some uncontroversial definitions to help align the discussion – and according to those definitions, they believe foundation model understanding is not only possible but inevitable as these models scale.
“These definitions are useful in the sense that they align with definitions that people have used in mathematical logic, as well as mathematical model theory, as well as epistemology, dynamical systems, and linguistics and so on and so forth,” said Stefano Soatto, Amazon Web Services’ vice president and distinguished scientist, who coauthored a paper on the subject.
For ‘meaning,’ the researchers use a definition first proposed by the 17th-century philosopher Antoine Arnauld, who defined it as the relationship between a linguistic form (the text the models are trained on) and the thing or concept that form points to, which is external to language. The word ‘tree’ is not a tree; it points to the concept of a tree, and the meaning of the word is that relationship.
Soatto and his coauthor, AWS applied scientist Matthew Trager, explain in a paper published earlier this month that these meanings can be represented either as vectors in an abstract space or as probability distributions in the same space. This makes it possible to talk about proximity (which meanings resemble each other) and, to some extent, entailment (what implies what) and causality (what follows what).
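To make the vector picture concrete, here is a minimal sketch – not the authors’ implementation – of how proximity between meanings can be measured once expressions are embedded as vectors. The three-dimensional embeddings below are invented for illustration; in practice they would come from a trained model’s encoder.

```python
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Proximity of two meaning vectors: near 1 for similar meanings, near 0 for unrelated ones."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical 3-dimensional "meaning space"; real embeddings have thousands
# of dimensions and are produced by the model itself.
tree    = np.array([0.9, 0.1, 0.0])
oak     = np.array([0.8, 0.2, 0.1])
invoice = np.array([0.0, 0.1, 0.9])

print(cosine_similarity(tree, oak))      # high: nearby meanings
print(cosine_similarity(tree, invoice))  # low: distant meanings
```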
They define ‘understanding,’ meanwhile, in terms of equivalence classes of expressions, which form a representation of that meaning – the relationship between a word and what it refers to. Large foundation models represent such equivalence classes, viewed either as vectors or as distributions of continuations, which lets them reason and operate on meaning without storing every detail, the researchers said.
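A rough sketch of the ‘distributions of continuations’ view, again with made-up numbers: two expressions can be treated as members of the same equivalence class – the same meaning – when a model assigns them nearly identical distributions over what comes next. The vocabulary, probabilities, and threshold below are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

# Hypothetical next-token distributions over a toy four-word vocabulary,
# standing in for what a language model's softmax might assign after each prompt.
vocab = ["tall", "green", "expensive", "overdue"]
p_old_oak      = np.array([0.45, 0.45, 0.05, 0.05])  # "The old oak is ..."
p_ancient_tree = np.array([0.40, 0.50, 0.05, 0.05])  # "That ancient tree is ..."
p_unpaid_bill  = np.array([0.05, 0.05, 0.45, 0.45])  # "The unpaid invoice is ..."

def same_meaning(p: np.ndarray, q: np.ndarray, threshold: float = 0.1) -> bool:
    """Put two expressions in the same equivalence class when their
    continuation distributions are close (small Jensen-Shannon distance)."""
    return jensenshannon(p, q) < threshold

print(same_meaning(p_old_oak, p_ancient_tree))  # True: paraphrases share a meaning
print(same_meaning(p_old_oak, p_unpaid_bill))   # False: a different meaning
```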
Consider the number pi (π). Its decimal expansion is infinite and non-repeating (3.14159265358979…), so no one could possibly memorize or store the entire sequence of digits in their brain. But we can still understand and work with the concept of pi.
How? By having a representation in our minds that captures the essential properties of pi – that it is the ratio of a circle’s circumference to its diameter, that it starts with 3.14, that it goes on forever without repeating, etc. With that mental representation, we don’t need to recall the entire unending string of digits.
“Our representation is an abstract concept, just like the meaning of a sentence,” Soatto said in a phone call.
More importantly, this representation allows us to answer specific questions and do things with the concept of pi. For example, if I asked you what the millionth digit of pi is, you wouldn’t have to recite a million digits to prove you understand pi. Your mental representation would allow you to work it out or look it up, using a finite amount of brainpower and time.
So, understanding an abstract concept means forming a mental model or representation that captures the relationship between language describing a concept and key properties of that concept. That mental model or representation can be reasoned with and queried using our limited human memory, mental processing power, and access to data – rather than trying to literally store an infinite amount of raw data. Building a concise yet powerful representation, according to this view, demonstrates true understanding.
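The pi example can be made concrete in a few lines of code. The sketch below uses Machin’s classical formula rather than anything from the AWS paper: the ‘representation’ of pi is a short procedure that can produce any requested digit on demand, even though the full digit string could never be stored.

```python
def pi_to(n: int) -> str:
    """Digits of pi to n decimal places, computed on demand from Machin's formula,
    pi = 16*arctan(1/5) - 4*arctan(1/239). The 'representation' of pi here is this
    short procedure, not a stored list of digits."""
    guard = 10                      # extra digits to absorb truncation error
    scale = 10 ** (n + guard)       # fixed-point scaling factor

    def arctan_inv(x: int) -> int:
        # arctan(1/x) * scale, summed from the alternating Taylor series
        total = term = scale // x
        k = 1
        while term:
            term //= x * x
            k += 2
            total += -(term // k) if (k // 2) % 2 else term // k
        return total

    digits = str((16 * arctan_inv(5) - 4 * arctan_inv(239)) // 10 ** guard)
    return digits[0] + "." + digits[1:]

print(pi_to(30))        # 3.141592653589793238462643383279
print(pi_to(1000)[-1])  # the 1,000th decimal digit, produced without memorizing pi
```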
With scale, the researchers believe, these models begin to understand
Models trained as next-token predictors learn stochastic patterns. But over time they identify relationships – meanings – among the data they’ve seen and, as they scale, develop their own internal languages describing those relationships. These inner languages emerge when a model reaches hundreds of billions of parameters, Soatto said.
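For readers unfamiliar with next-token prediction, here is a deliberately tiny illustration of the kind of stochastic pattern such training starts from: a bigram counter over a toy corpus. Real foundation models learn vastly richer conditional distributions with neural networks; nothing below is taken from the paper.

```python
from collections import Counter, defaultdict

# A toy next-token predictor: count which word follows which in a tiny corpus.
corpus = "the tree is tall . the oak is a tree . the tree is green .".split()

counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    counts[current_word][next_word] += 1

def next_token_distribution(word: str) -> dict:
    """Empirical probability of each token observed after `word` in the corpus."""
    total = sum(counts[word].values())
    return {tok: c / total for tok, c in counts[word].items()}

print(next_token_distribution("tree"))  # {'is': 0.666..., '.': 0.333...}
```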
Humans acquire language using only around 1.5 megabytes of data – less than the capacity of a 1980s floppy disk. “Humans are proof that you don’t need specialized logic hardware or dedicated hardware with tight logical constraints to do math,” Soatto said.
But Trager and Soatto suggest that large-scale models must “find their way” around their large “brains,” shedding redundant information so they can better discover meaningful relationships within the data they are trained on.
As that relational discovery improves with scale, the researchers believe, these models begin to understand: that is, they form representations of those relationships, those meanings, that they can then operate on. Even if they do not currently understand in this way, the researchers say, they likely will with continued scaling.
There is a concept in biology of a ‘critical period,’ when an organism’s nervous system is particularly receptive to learning specific skills or traits. The period is marked by high levels of brain plasticity before neural connections become more fixed.
Foundation models also have a critical period, the researchers believe, during which they need to accumulate a significant amount of information. The model can then discard some of this accumulated information while enhancing its ability to identify relationships among the data it encounters during inference.
It’s not clear where in the evolutionary tree this happens in biology – there is no light switch separating understanding meaning from not understanding it. It’s a spectrum that emerges with the number of neurons in the brain: an amoeba probably doesn’t have it; a squirrel likely has some. As machines scale, the researchers believe, understanding of meaning, under their definition, will emerge as it does in biology.
“Scale plays a key role in this emergent phenomenon,” Soatto said. “Where does it stop? Right now, there is no obvious end in sight.”
The problem, he said, is that we may never definitively know when understanding emerges.
“With machine learning models at scale, you can prove that (a) they can represent abstract concepts, such as pi, (b) they currently do not, at least not reliably, and (c) if and when they do, we cannot know for sure. This is no different from humans: when we teach students, how do we know they ‘understood’ the material? We administer an exam. If they fail, we know they did not. If they pass, do we know they got it? Had we asked one more question, would they have answered correctly? We won’t know until we ask.”
Yet, unlike humans, these large models are not opaque systems: we can measure, audit, and calibrate them. This is a key advantage over the opacity of the human mind, and it holds the promise of understanding not only how these properties emerge in a foundation model, but also how they emerge in the human mind.