Bicycles for the Mind and Software-Augmented Mentats
A look through the history of augmenting the human mind and some thoughts on the future
Human Software
Rituals and routines are human software, memes that spread because they are [perceived as] useful.
I generally think of the mind not as a single program running on some hardware, but rather as many little pieces of software. They are downloaded and synthesized via mimesis and observation, and they interact in various ways, sometimes positively and sometimes negatively. They compose surprisingly well and seem to rely a lot on semantic/associative links.
Clearly the brain stores a lot more knowledge than it can properly index. “List every fact you know” is a laughably impossible task, but all that information is in there somewhere and retrievable somehow. Any fact that you have learned and have not yet forgotten can be summoned with the right stimulus. Sometimes things are very deeply buried. There are things deep in your memory that may remain there for the rest of your life, but that you may never think of again for lack of the proper stimuli.
Replicating human intelligence very likely is not merely a matter of copying the software, but also of copying the architecture by which this software can be composed, as well as the ability to rapidly acquire new software. Existing neural networks are far too data-intensive, requiring many thousands of examples to learn what humans can learn from only a couple of examples.
Digital software allows us to automate many previously cognitive tasks. There is great merit to the computer analogy for the brain, though the brain is by any measure a very alien form of computer. Nevertheless, the ultimate limits of computing apply to both just the same, and as each is pushed toward those limits the two should be expected to converge in many ways.
But human software can rely on a great deal of common sense, general knowledge, and likely a great deal of deeply buried cognitive infrastructure. While running on very similar neural hardware, people of wildly different cultures can see the world very differently, and certainly people from merely a few centuries ago would struggle in the modern world, let alone prehistoric people. Even notions like progress and linear time are historically abnormal and relatively recent developments. Human and animal sacrifice was for a very long time a near-universal human practice, understood by everyone involved to be critically necessary in a way that modern people can scarcely comprehend.
Digital software may formally encode a great deal of what’s necessary to accomplish a task, but there’s also a great deal that it doesn’t encode. The entire problem of human-computer interaction is of course the interaction between digital software and human software, and the human side will continue to lack formal specification. Whatever drives a person to use a piece of software, whatever method they use to gather information from their mind and from the world and funnel it into the machine, is all human software that is completely independent of that which is stored in the machine. It is just as critical if not more so, it cannot be abstracted away or removed, and it can vary enormously, even from person to person.
I remember a conversation with my mother, a nurse, about medical record-keeping software. There is standard, near-monopolized software, and there are major efforts to standardize things further, but to a large extent every hospital will use the same software in different and idiosyncratic ways.
As we better understand software, the brain, and the harmonized interaction between the two, we'll be able to push the limits of what each can do even further.
Computers as a Bicycle for the Mind
We’ve had writing for about 54 centuries, with proto-writing systems dating back at least 200 centuries.
We’ve had sculpture and other sophisticated visual arts (cave painting, etc.) for at least 430 centuries, with more primitive abstract art dating back perhaps a thousand centuries or more.
Modern computing is less than one century old. I don’t think we’ve achieved Steve Jobs’ goal of a “bicycle for the mind” – a tool for greatly augmenting human thought, analogous to how a bicycle dramatically increases the efficiency of human locomotion. As impactful as the smartphone and personal computer have been, the idea that we have it all figured out and that deeper and more valuable use of computers is not possible is an obvious failure of imagination.
Perhaps falling short of such an ambitious goal is a failure on Jobs’ part, but I think it more likely that this is not a simple product that can be shipped in a few years but a great human project that will take centuries. He himself, while explaining the bicycle analogy, argued that computing a century on would be vastly better than what existed at the time.
To a large extent, human excellence is not a matter of nature but of environment and routine. Your habits and behaviors contribute enormously to shaping your mind and body. While raw, genetically influenced brainpower is perhaps a significant factor, often the most remarkable people have undergone great and unusual hardships, or come from strange and unique environments, or are simply very mentally ill. Great and influential people are often otherwise ordinary people who carry some internal contradiction or conflict and are intensely motivated to change the world around them in order to resolve it.
There is only so much information about who someone is that can be encoded in and influenced by genes; much of the rest is environment. Likewise, the behavior of a computer can vary wildly based on how it is programmed. The number of distinct tiny programs that fit into a mere 32 bytes rivals the number of atoms in the observable universe – far more than could ever be explored. There are short snippets of code that could in principle be typed into a computer and achieve remarkable things that no civilization – human, alien, or otherwise – will ever live long enough to see. The Library of Babel is vast.
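For scale, a quick back-of-the-envelope count (using the commonly cited figure of roughly $10^{80}$ atoms in the observable universe, which is my assumption rather than a number from the text above):

$256^{32} = 2^{256} \approx 1.2 \times 10^{77}$

Each additional byte multiplies the count by 256, so by around 34 bytes the number of possible programs already exceeds the atom count, and it keeps growing hopelessly beyond any exhaustive search.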
A person who can train eight hours a day for years in a sport has a much better shot at being a world-class athlete than someone who spends their time differently. A bright person prone to deep thought may form a recognizably different style of intelligence from someone who reads voraciously while spending relatively little time reflecting deeply on or synthesizing the disparate things they have read. Someone who spends their time debating others will cultivate another very different skillset, as will someone who negotiates business deals all day.
Someone who spends their time thinking things over purely in their head may develop a more entangled web of knowledge, while someone who spends their time writing may become better at serializing their thoughts. Someone who does calculations by hand may gain a very different intuition for numbers than someone who types their calculations into a computer. If we bring a wider range of software into people’s lives, the way our tools for thought shape how we think may continue to produce new styles of thought that we have yet to see.
I would not be surprised if the Greeks’ approaches to axiomatic mathematics and formal logic were downstream of their early adoption of alphabetic writing, which reduced previously complex, logographic writing systems down to a relatively small set of general symbols. Earlier mathematics and reasoning was often more intuitive, with arithmetic algorithms taught through repetition but lacking any notion of formal proof. Students would be given a procedure, set to work through hundreds of examples pressed into clay or written on papyrus, and eventually just intuit the patterns and the understanding of why the seemingly magical procedure worked.
Humans may have common interfaces such as language and culture, but this hides an enormous amount of underlying diversity and complexity. Different people, even with a tremendous amount of shared understanding, can nonetheless have wildly different ways of thinking. There are billions of bits of information flashing around the human brain every second, across many tens or hundreds of trillions of synapses. Meanwhile, human speech maxes out at only a few dozen bits per second. There is certainly much more data embedded in body language – every part of the cortex talks to and influences every other, and the brain does a great deal of thinking through action – but this is neither well understood nor very consciously acted out, and mapping billions of neuron spikes onto a couple hundred muscles is still necessarily an extremely lossy process.
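To put rough numbers on that gap (taking the “few dozen bits per second” above as about 40 bits per second, and a deliberately conservative $10^9$ bits per second of internal signaling – both order-of-magnitude assumptions rather than measurements):

$\dfrac{10^9 \ \text{bits/s inside the cortex}}{4 \times 10^{1} \ \text{bits/s of speech}} \approx 2.5 \times 10^{7}$

Even under these conservative assumptions, speech carries over seven orders of magnitude less information per second than is flowing around in the brain that produces it.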
Historical Human Augmentation
Early Greek mathematics was often more appropriation than invention, with many Greeks sailing to far-off lands in search of ancient geometric wisdom. They would then return to Greece and share what they had learned. As mathematics became more axiomatic with Euclid, rigorously building upon existing knowledge in new directions became easier, and complicated concepts that could not be easily intuited from a few examples became much more viable. What had previously been superhuman mathematical reasoning suddenly became relatively common.
And this is far from the only example of information technology expanding human abilities, even in ancient times. Even before the obvious example of literacy, preliterate oral cultures made near-universal use of song and poetry – the added regular, repetitive structure makes memorization of large volumes of text much easier. At the very least, these additional constraints can aid in error correction, narrowing the options for which words could come next when one’s memory grows hazy.
Poetry allowed the pre-literate Greeks to memorize their immense epics like the Iliad and Odyssey, whose combined length is about half that of the Bible. Book 2 of the Iliad even contains the Catalogue of Ships, in which Homer exhaustively lists the contingents, leaders, and ship counts of the entire Greek fleet. This is sometimes argued to have functioned as an additional way for ancient bards to show off their impressive memorization skills.
Aboriginal Australians are known to have used songs for navigation, called Songlines. These songs encoded sequences of landmarks and directions to points of interest, often interwoven with mythology. They could also serve as a form of passport – travelers who could accurately sing the local songs and follow the route were better respected and faced fewer threats in unfamiliar territory than those who could not. Many Aboriginal languages augment this with absolute direction – you do not have a left foot and a right foot, but an east foot and a west foot, and if you turn 90 degrees you now have a north foot and a south foot. With direction so deeply baked into the language, one cannot communicate without maintaining an extremely accurate sense of direction.
Memorization augmentation is not limited to rhyme and meter, of course. The Method of Loci, or memory palace, has been known for thousands of years. Imagining yourself wandering through a vast palace and using imagined objects in each room as cues can shift semantic information into spatial memory, where it may face less congestion.
Furthermore, the mathematical notation and symbol-manipulation rules developed for algebra and subsequent mathematics were an enormous advance over ancient mathematics – what would previously have required a lengthy paragraph analogizing things to lines, squares, and cubes of various interrelated constructions can instead be replaced by a concise collection of symbols. Transforming those symbols into related equations which may reveal different insights is often a matter of mechanically following some simple rules and applying some basic pattern matching and reasoning. This is a powerful aid to mathematical thought, and to abstraction over difficult problems whose geometric meaning may be hard to intuit immediately. Difficult leaps in logic can sometimes be replaced by manipulating symbols according to simple rules.
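As a small illustration (a standard example, not one drawn from any particular passage above): Euclid’s Elements II.4 states, in effect, that if a line is cut at an arbitrary point, the square on the whole equals the squares on the two segments together with twice the rectangle contained by the segments – a full sentence of geometric construction that modern notation compresses into a single identity:

$(a + b)^2 = a^2 + 2ab + b^2$

Expanding, factoring, or substituting into such an identity follows simple symbol-rewriting rules, with no need to re-derive the picture of squares and rectangles each time.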
Future Human Augmentation
The cerebral cortex is made up of modular brain circuits called cortical columns. Regardless of whether one looks at the visual cortex (vision), auditory cortex (hearing), prefrontal cortex (abstract thought and planning), motor cortex (muscle movement), gustatory cortex (taste), or any other cortical region, and regardless of whether one looks at humans, rats, cats, dogs, raccoons, or most other mammals, the structure of cortical columns remains shockingly consistent: the same 6-12 layers (depending on how one decides to count them), with consistent cell types connected to each other in a consistent circuit. There is some variation – aquatic mammals often have only 3 layers, and the human visual cortex divides layer IV into a couple of extra sublayers – but these are rare exceptions.
This strongly implies that despite there being many patches of the brain with accepted functions, there is in fact very little specialization in the cortex – what makes the visual cortex visual is not that millions of years of evolution have pre-trained or optimized it for vision, but simply that it receives information from the retina. Experiments in newborn ferrets have rerouted visual input into the auditory pathway, sending information from the retina to the auditory cortex, and the animals learn to see with it just the same.
The visual cortex that evolved for hunting and navigation can be repurposed for drawing art, for reading, or for operating a computer or smartphone. No natural selection is required at all to rewire it. The human brain is incredibly general-purpose, and will certainly be capable of learning no shortage of future inventions and augmentations. As I have discussed many times before, sensory substitution and sensory augmentation are surprisingly underexplored technologies for their potential – the brain will simply learn to recognize patterns in whatever data you give it, no matter the sensory domain. Writing and data visualization are extremely successful examples of this. There are also many niche examples, such as blind people teaching themselves echolocation.
It is sometimes said that you remember 10% of what you read, 30% of what you hear, 70% of what you do, and 90% of what you teach. Other numbers and orderings are thrown around, but the general principle – that passive absorption of information engages memory less than active engagement – is fairly consistent. One observation is that more active engagement would also involve a larger number of brain regions.
Whatever it is that you are seeing, hearing, feeling, or thinking, your brain needs some way to encode and represent it. If it currently lacks the neural coding to do so, it will repurpose existing neurons to learn a representation for this new information. Engage more with an idea and your brain will form a deeper representation of it. If that representation is stored entirely in your visual cortex because you only engage by sight, it must compete for space with everything else you engage with visually. If you instead begin engaging other senses, then we can argue from first principles that this model will be spread across other brain regions, potentially regions that are much less congested.
It is also worth noting that literacy has the effect of shifting a very wide range of functions over to the left hemisphere of the brain, where the language circuit is, and of reorganizing the visual cortex substantially to optimize the recognition of lines and corners. Another piece of personal speculation: since the right hemisphere often takes on more social reasoning roles, the religious tendency to anthropomorphize forces of nature may in fact move related reasoning to the less-congested right hemisphere, which may have enabled more complex reasoning on these subjects. Notably, one of the greatest cultural influences on early Christianity was Hellenistic Judaism, led by thinkers such as Philo, who argued that Jewish theology, if read allegorically, had reached conclusions on many subjects very similar to, and compatible with, those of the Greeks.
The human cerebral cortex has somewhere on the order of 20 billion neurons and 10-100 trillion synapses. Neurons can fire at up to around 500 Hz, and activity is generally rather sparse – only a couple percent of neurons may be firing at a time, though the exact amount can vary substantially over time. Assuming 2% sparsity, we can estimate that the cortex performs somewhere around 1-10 quadrillion operations per second. Each synapse may only store a few bits in terms of its weight, though there is also evidence suggesting that neuronal pattern recognition is timing-sensitive, and that synapses physically migrate up and down dendrites to adjust that timing, suggesting even more degrees of freedom (and perhaps more computational power). Nevertheless, the synapse count is high enough that the brain’s storage capacity is well into the tens if not hundreds of terabytes.
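One way to land in that range (a rough sketch under the assumptions above – roughly $2 \times 10^{10}$ neurons, $10^{13}$–$10^{14}$ synapses, peak firing around 500 Hz, 2% of neurons active – treating each synaptic transmission as a single event):

$2 \times 10^{10} \times 0.02 \times 500\ \text{Hz} \approx 2 \times 10^{11}$ spikes per second

$2 \times 10^{11} \times (500\text{–}5{,}000\ \text{synapses per neuron}) \approx 10^{14}\text{–}10^{15}$ synaptic events per second

Counting each synaptic event as a handful of elementary operations lands in the quadrillion range quoted above, and a few bits of weight per synapse across $10^{13}$–$10^{14}$ synapses is on the order of $10^{14}$ bits, i.e. tens of terabytes, before accounting for any information carried by timing.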
This is a very large amount of computational power, and the large numbers involved are perhaps an indicator that in many domains we are not actually pushing the computational or storage limits of the brain very hard. After all, as previously discussed, ancient people found many ways of pushing those limits well beyond what we would today expect to be possible. With future techniques, greater assistance from technology, and a deeper understanding of neuroscience and psychology, it may be possible to push the limits well beyond what we currently imagine.
The vast majority of software we run on our computers is laughably poorly written and inefficient. Even code that makes attempts at optimization may be implemented in languages that are fundamentally sloppy or wasteful with resources. A difference in quantity often results in a difference in kind – while greater computing power enables existing applications to be scaled further, it also enables new applications which were not possible before. Additionally, the most optimized version of a solution is almost never the first to be found, and so something that is just barely possible on a new generation of computers may, in time, be optimized to run on much weaker machines. A modern GPU may be comparable in performance to a supercomputer from 15-20 years ago, but the GPU is certainly far more useful than the supercomputer was: there has been more time to figure out what exactly can be done with that much compute, more effort has gone into optimization to make costly tasks tractable, and new ideas about what to use computers for have emerged in the meantime. The timescales may be longer, but there is no reason why this cannot apply to the brain just as well. With developments such as writing and advanced mathematics, it largely already has.
My personal critique of the Mentats in Dune – humans trained in mental computation to replace banished computers – is that the feats of such Mentats are often unimpressive compared to what may actually be possible based on the numbers. The people of Dune apparently understood intelligence well enough to replicate it in a machine, try the technology out for thousands of years, turn against it and banish it, and then spend ten thousand years using this deep understanding of the mind to optimize human thought – yet Mentats still don’t seem capable of operating on more than a few thousand numbers at a time.
Regardless of the imagination of 1960s science fiction, I expect that the next few centuries of human-computer interaction will enable us not only to automate many of the things we use our minds for today, as is often assumed, but to push the limits of what humans can do even further. I do not, however, expect any unified effort, but rather immense fragmentation – humans doing fantastical yet diverse things by pushing their minds very far in very different directions. Perhaps we will be most constrained by our need to interact and interface with each other, which requires that people’s minds not become too distant from one another.
We may see mathematics Mentats, engineering Mentats, history Mentats, even creative Mentats, and perhaps a wide range of other things well beyond our current conception. This will require not only a deeper understanding of the mind and how to push its limits, but also an understanding of the types of neural computations required to accomplish these tasks, how to shape them through the diverse array of tools humans have historically used to augment their minds, and what new forms computers now enable both for offloading work and for shaping thought.
The early adopters of proper Mentat software may find themselves superhuman by historical standards – intellectuals and creatives with seemingly impossible powers, in much the same way that memorizing the entirety of the Iliad seems superhuman today but was completely achievable with Bronze Age cognitive technology.
And to answer the obligatory question these days – while “AI” technology is neat, I think it’s fundamentally very different from what I’m talking about here. There’s perhaps a role for AI methods to be applied, and there likely are many cases where the ultimate limits of the brain and the limits of other extreme computing systems share some resemblance, but I expect deepening human-computer interaction to be a very different tech tree entirely, and that there are no simple answers to this problem no matter how much AI is involved.