Companies are falling over each other to grab the lead in AI. The Next Big Thing has appeared in the form of generative AI like ChatGPT. It will clearly bring benefits to sectors such as medicine, travel, and coding. But the risks are real, as noted at the AI Safety Summit held in Britain last month, hosted by Prime Minister Rishi Sunak and attended by 28 countries including the U.S. and China. While acknowledging the potential benefits of AI systems, the summit's declaration warned that they “also present significant dangers in terms of job losses, disinformation, and national security.” I wish to expand on those risks by looking at AI in the context of how we evolved from prehistoric hominids to modern humans. 

A far-reaching discovery of our hominid ancestors was the use of fire. Before fire, hominids like Homo erectus spent eight hours a day feeding themselves, including chewing endlessly for digestion. Even then, they extracted only 30% of the nutrients from the rough food. The rest was rubbish. Once fire came into use, there was considerably less need for chewing, and later hominids were able to extract almost 100% of the nutrients. And they needed only four hours a day to feed themselves instead of eight. This eventually led to modern human behavior such as language, music, and the visual arts. Fire made us human. 

Today, the Internet has flooded our lives with information. We spend three to eight hours a day or more consuming information. But we don’t digest everything that comes in. No more than 30% of the email messages I receive daily are worth even looking at. The same goes for social media. So not only are we spending about the same time as Homo erectus on what we consume, but we also extract a similarly low proportion of nutrients from it. We are at a primitive, pre-fire stage of the information age. What we urgently need is a fire for our times. Is generative AI the answer? The ambition of AI systems to become Artificial General Intelligence poses the real threat of flooding our lives with exponentially more rubbish. This is already happening. We hear more and more cases of machine-learning hallucinations, and deepfake videos are undermining trust in the very fabric of human society. And this is just the beginning.

What would be the specs for the modern-day fire? We can get a hint or two from how prehistoric fire transformed the earlier hominid brain into the high-powered brain we possess today.

The problem with generative AI and the large language models it is based on is that they work precisely in reverse of how we developed as modern humans. LLMs like ChatGPT devour huge amounts of data, with algorithms that do pattern matching at lightning speed. And there is no end to having to feed them more raw data. 

In contrast, early humans required far less data. We just got smarter. As paleoanthropologists have pointed out, our brain began to cherry-pick the incoming data in order to build a symbolic representation through which to interact with the world. This gave rise to art, refined tools, and, above all, language, which allowed us to systematically organize selected data into something we could make sense of. This had a huge consequence for the size of the brain. After tripling in size over a period of three million years, our brain has in the last 20,000 years shrunk by about 150 cc, roughly the volume of a tennis ball. Symbolic thinking does away with having to process all that incoming raw data, much of it rubbish, so we shed brain tissue. The fire that we need should lead to Smaller Language Models that are smart, not mindless LLMs devouring endless amounts of data.

What is symbolic thinking? It is the ability to think in abstract symbols, which opens the way for hypothetical thinking, such as reasoning about what is right and what is wrong. This was an important part of becoming a modern human. Darwin noted that of all the differences “between man and the lower animals, the moral sense or conscience is by far the most important.” LLMs don’t have any moral yearning. ChatGPT has 175 billion machine-learning parameters, but not a single one could tell you whether something is true or false, good or bad. That’s why we have to be seriously concerned about the role AI systems may play in society. We need to invent the fire of our times before we let AI loose on our society. That fire will have to balance the yearning for technology with the moral yearnings we developed as modern humans.

What does this fire look like? Like past innovations in evolution, it is right in front of us, looking at us. Maybe, just maybe, what we are looking at is ourselves in the mirror. The late Michael Dertouzos, who headed the Laboratory for Computer Science at MIT, called the separation that society instituted in the nineteenth century between science-technology and humanities-arts a terrible mistake in our history. We need to bring them back together, he said. If we can create a society that reunites the two great achievements of humanity, technology/science and humanities/arts, so as to fully embrace moral as well as technological yearnings, and connect all that to AI systems that are smart, we might take ourselves to the next stage in evolution. Homo erectus existed for 1.5 million years. Humans have been around for less than 300,000 years. With the nuclear threat and global warming, we likely won’t make it anywhere near 300,000. Unless we put fire under us and try to flee the rubbish-laden society we’ve come to occupy.

The work for this article was funded in part by the São Paulo Research Foundation (FAPESP) (grant no. 2018/18900-1), research project “Innovations in Human and Non-Human Animal Communities.”