Aníbal Pauchard and I argued in Observation and Ecology that despite the great advances information technology has helped us make in complex fields like ecology, the increasing time both children and adults spend in front of screens instead of out in nature will erode our abilities to deal with complexity. Our argument was both experiential and intuitive. Experiential, because our years of working in the field with many students showed us that the students best able to discern patterns were those who had spent abundant time as children just wandering around observing nature. Intuitive, because how could a system that is always reducible to binary functions challenge and feed the brain with the cognitive complexity of the continuous functions of nature? Recent studies are backing up these observations with more finely focused data.

In particular, scientists are finding that information physically printed on good old paper and bound in books is better remembered and understood than information shared on an electronic screen. There is a nicely written review of this work by Ferris Jabr in the November 2013 Scientific American, which shows that this relatively simple observation, one that many people I talk with have come to believe (“I still have to print out my PDF reprints to read them,” is a common admission I hear from colleagues), belies many richer phenomena about language processing swirling in our minds and in our society. These phenomena are both deeply evolutionary and contemporary functions of how we deal with complex information.

The deep time side of this is that our Stone Age brains never had or needed a way to process written symbolic language. When that technology came along—unprecedented in evolutionary history—our brains needed a way to store this new way of transmitting information. So naturally, our brains adapted by tinkering and cajoling old parts into new roles. The result, like most evolutionary messing about, was not a beautifully efficient machine for shunting syntax around the mind, but a Rube Goldberg contraption that apparently lashes together and repurposes various neural architectures originally devoted to object recognition, vision, coordination, and speaking. These Frankenstein circuits do weird things. In some studies cited by Jabr, they light up like the brain is reading when a child practices writing, but they do nothing when the child is typing. In the mirror image of those studies, brain regions associated with hand motions apparently fire up when people are merely reading complex characters on paper. The emergent effect of this Gehry-like neural architecture is that we process written language as if it is a physical object in space.

This naturally leads to the contemporary way we use written words and the effects they have on our memories and psyches. Jabr makes a nice connection between the spatial arrangement of words on paper and a map. Developing maps of the world is what we do as babies, learning to deal with complexity (as outlined in Alison Gopnik’s excellent The Philosophical Baby) and what adults with apparently supercharged memories do to memorize large amounts of information. As outlined in memory champion Joshua Foer’s popular book Moonwalking with Einstein, the way to memorize large amounts of information is to map out in your mind the pieces of data in a three-dimensional space like a house.

In this regard, a physical book is literally a map of information – its compass laid out in the four corners of the page, its topography represented by the heft of left-hand pages already read or right-hand pages to come. Referencing these maps is how many people, including myself, recall information we have read – when I look for a particular passage I want to cite, I think about where it was with regard to the cardinal directions (“it was in the lower left side”) and where in the book it was (“somewhere near the beginning”). As Jabr eloquently notes, “Turning the pages of a paper book is like leaving one footprint after another on a trail.” By contrast, navigating in this way on an e-reader is like strolling through downtown Owl, North Dakota in the middle of a whiteout snowstorm – the continuous scroll is a featureless landscape.

This spatial and tangible-object theory of written language contextualizes a chance observation my mother made a while back when she volunteered to teach an older illiterate man how to read. She found that he had a lot of difficulty identifying his letters if they were printed in a font different than that of his literacy primer. Because he hadn’t learned his alphabet simultaneously with his early cognitive mapping of the physical world, he couldn’t map one similar shape onto another – each new “R” was a completely novel experience, rather than a symbol of a common “R.” So my mother, a graphic designer, printed up sheets for each letter of the alphabet in dozens of different fonts cribbed from her Letraset books (sorry for aging you, Mom!).

In this light, it pains me to see the continual boosterism for getting more screens in front of more people, especially children. The urgency with which we shove screens in front of children is often rationalized by the notion that they “have to learn how to operate in a digital world.” No one of my non-digital-native generation ever touched a computer before high school, yet some of us invented Netscape and Google and thousands of other busts and breakouts, and the rest of us do virtually everything work-related on a dizzying array of digital devices that are turned over for new devices every eight months or so. I have never felt handicapped for not having spent my formative years in front of a screen.

At the same time, there are lots of stories out there about the electronic prowess of the digitally native Millennials. I find these breathless reports to be massively overblown and misleading. My 12-year-old daughter picks up any new digital device and masters its basic functions quickly enough, but she still asks me – a Gen Xer who didn’t touch a real computer until I was 15 or so – for help with various complex functions of Word or PowerPoint. On Christmas Day, my 4-year-old niece certainly learned quickly how to navigate to the drawing app on her brand new Nabi, but her frantic tapping on the screen to change an outlined shape from solid orange to solid green gave me little hope she was actually learning how to draw. The challenge of learning how to interact with a screen is trivial and will actually get easier with time, even for people who never touch a computer or see a screen until old age. But the reverse is not true – the problem of learning how to interact with nature, and to discern patterns out of its complexity, only gets harder the less it is practiced.

About Rafe Sagarin

Rafe Sagarin is an assistant research scientist, marine ecologist, and environmental policy analyst at the Institute of the Environment, University of Arizona. He is the co-author of Observation and Ecology with Aníbal Pauchard.