What can neuroscience tell us about engineering?
This might seem like an odd question, whose answer is perhaps “not much”, but it’s worth highlighting the strong human component of engineering. It’s humans who do the engineering, after all.
When watching a video on a poor connection, often not enough data can get to your screen to display the video correctly. What happens then? Well, the video you’re shown is not the original video.
Okay, but there’s practically an infinite number of ways in which the image on the screen can differ from what was supposed to be on the screen. The question we should really be asking is “Why was this particular image shown as opposed to any of the infinite others?”
Exactly how the video you’re shown differs from the original is very dependent on the chain of systems involved in getting that video in front of you. An old analog CRT TV is going to produce very different visual artifacts and defects in the presence of noise than a modern digital TV with its digital encoding, checksums, error correcting codes, and data compression.
When an old analog CRT TV faces interference, you get static, visible horizontal bands, and waves across the screen - the result of noisy electromagnetic waves interfering with the transmission, then the combined signal incorrectly driving the scanlines that are zigzagging across the screen. When a modern digital video is interrupted, the image often becomes covered in blurry rectangles, as the higher-resolution chunks of the image are deemed too corrupt to display, and a lower-resolution fallback is displayed instead.
Even if many of the steps involved in getting us our video are a black box to us for practical purposes, there’s still a large extent to which these kinds of failures can hint at what’s inside. A little knowledge of the basic mechanisms that make up this system can give us an even deeper understanding, even if we’re not told exactly how all those pieces are put together.
Why not apply this approach to the brain?
A great neuroscience paper from 2017, by Eric Jonas and Konrad Kording, is titled “Could a Neuroscientist Understand a Microprocessor?” The article asks whether the techniques used by modern neuroscience to understand the brain - techniques commonly believed to be bottlenecked only by data - are actually effective at all. It takes the MOS 6502 - an old, simple, well-studied CPU - and attempts to use many classic and modern neuroscience tools, such as connectomics and lesion experiments, to understand how it works. Despite having every piece of information necessary to emulate the CPU perfectly, the authors conclude that such tools are largely inadequate for explaining much of a CPU, and are likely equally inadequate for explaining the brain.
So we know that the tools used in modern neuroscience are not adequate on their own, and that new techniques and perspectives are necessary. Given a novel approach, why not see where it goes?
Navigation in the Brain
The brain evolved for navigation. We certainly use it for many much more sophisticated things than merely finding our way around 3D spaces, but navigation is a good place to start.
Some of the basic types of neurons that are known to show up in the brain are grid cells, place cells, and head direction cells. In fact, these three account for the majority of research on how the brain navigates spaces. A few other cell types have been observed participating in navigation, but these three seem to be the brain’s navigational bread and butter.
Grid cells map out a hexagonal coordinate space across 2D environments, encoding the brain’s equivalent of (X, Y) spatial coordinates. Grid cells are believed to underlie path integration - keeping track of position by integrating one’s own movements, so that the current position plus a sequence of planned actions yields a predicted end location.
Place cells fire when the animal is in specific, known locations. Think of them as landmarks. There is reason to believe that these integrate closely with grid cells to create associative links between locations in the grid cell coordinate space and sensory/conceptual signals.
Head direction cells encode the direction the animal’s head is facing.
For now, I won’t go into fine computational details on exactly how these different neuron types actually work or encode their data (I’ll save that for a future installment of my Neural Computing series), but we can at least conclude from this that the most common primitives that the brain uses for navigation involve some combination of coordinates, orientation, and landmarks.
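To make that concrete, here’s a minimal toy sketch of how these three primitives could fit together. This is a cartoon in Python, not a biological model; every name in it is invented for illustration.

```python
# Toy model of the three navigational primitives: a coordinate estimate
# (grid cells), landmark associations (place cells), and a heading estimate
# (head direction cells). Purely illustrative; all names are invented.
import math

class Navigator:
    def __init__(self):
        self.position = (0.0, 0.0)  # grid cells: estimated (X, Y) coordinates
        self.heading = 0.0          # head direction cells: angle in radians
        self.landmarks = {}         # place cells: landmark -> known coordinate

    def turn(self, angle):
        """Update the heading estimate."""
        self.heading = (self.heading + angle) % (2 * math.pi)

    def step(self, distance):
        """Path integration: dead-reckon a new position from heading + movement."""
        x, y = self.position
        self.position = (x + distance * math.cos(self.heading),
                         y + distance * math.sin(self.heading))

    def remember(self, name):
        """Associate a landmark with the current coordinate estimate."""
        self.landmarks[name] = self.position

# Integrating over a planned sequence of actions predicts the end location:
nav = Navigator()
nav.remember("big rock")
nav.step(10)            # walk 10 units straight ahead
nav.turn(math.pi / 2)   # turn left
nav.step(5)
print(nav.position)     # (10.0, 5.0) - the predicted endpoint of the plan
```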
From this, I think we can construct a plausible model of how humans and other animals navigate to locations. And that model is actually pretty close to the classic A* algorithm.
If the goal is in sight, the algorithm is to simply face in the direction of the goal and walk toward it. Any obstacles in the way can be navigated around using path integration, normally in an attempt to quickly bring the goal back into view.
In cases where the end goal is not within sight, some amount of memorization of the environment is necessary. This memorized model is pretty easy to build in an environment in which a creature spends a lot of time. Common, easily recognized landmarks can be memorized using place cells, and a basic graph could be memorized; e.g., go left from the big rock and you’ll eventually see the weird tree, or go right from the big rock and you’ll find the river, etc.
The brain also seems pretty good at memorizing sequences, so memorizing big black rock, river, river bend, big brown rock, etc. is not too hard.
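Here’s a rough sketch of that model as A*-style search over a memorized landmark graph. The landmarks, coordinates, and connections are invented for illustration; the heuristic is the straight-line distance that grid-cell-style coordinates would make available.

```python
# A* over a memorized landmark graph. Coordinates play the role of grid
# cells, landmark associations play the role of place cells. All values
# here are invented for illustration.
import heapq
import math

coords = {
    "big rock": (0, 0), "weird tree": (4, 0),
    "river": (0, 3), "river bend": (4, 3),
}
edges = {
    "big rock": ["weird tree", "river"],
    "weird tree": ["big rock", "river bend"],
    "river": ["big rock", "river bend"],
    "river bend": ["weird tree", "river"],
}

def dist(a, b):
    (x1, y1), (x2, y2) = coords[a], coords[b]
    return math.hypot(x2 - x1, y2 - y1)

def a_star(start, goal):
    # Each entry: (estimated total cost, cost so far, node, path taken)
    frontier = [(dist(start, goal), 0.0, start, [start])]
    visited = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt in edges[node]:
            so_far = cost + dist(node, nxt)
            # "Face toward the goal": straight-line distance as the heuristic
            heapq.heappush(frontier,
                           (so_far + dist(nxt, goal), so_far, nxt, path + [nxt]))
    return None

print(a_star("big rock", "river bend"))
# ['big rock', 'river', 'river bend'] - ties with the route via the weird tree
```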
For comparison, think about how humans navigate. If you’re in a room and want to walk toward something, you obviously just face the object and walk toward it. More complex navigation problems usually involve some memorization, either from familiarity with the environment or from someone giving directions.
The “giving directions” part is not particularly common outside of humans and relies on some amount of high-bandwidth communication; however, think for a moment about how the mechanics of this work. Giving directions usually boils down to a sequence of “turn left at landmark X, go straight until you reach landmark Y, then turn right,” etc.
Turning to the US Interstate system as an example - while this is not a natural system, it is optimized for ease of use. “Ease of use” in this case boils down to whatever we have learned from experience that this black box - the human brain’s navigation system - naturally finds easy. Navigating the US is pretty easy with the Interstate system: odd-numbered highways run North/South and even-numbered ones run East/West, major highways often intersect in or near big cities (recognizable landmarks, even if the recognition is mostly from reading signs), and street signs can guide you the rest of the way to pretty much any major city or attraction you wish to visit.
Conceptual Navigation
One surprising detail about the brain, particularly the cortex, is its regularity.
The cortex is the large, wrinkled part on top of the brain, which is responsible for a very wide range of functions. Often it is portrayed as being divided into many smaller, specialized regions - visual cortex, motor cortex, auditory cortex, prefrontal cortex, etc. While there is some truth to this, it paints a much more rigid picture of the cortex compared to reality.
The cortex is made up of modular circuits called cortical columns. Regardless of which part of the cortex you look at, and what function it serves, there is very little variation across the cortex in terms of the structure of these columns. Columns pretty much universally have the same set of layers, the same cell types in each layer, and the same connectivity between layers. There is some variation - for example, primates have a couple extra sub-layers in layer IV of the primary visual cortex that appear to briefly process color and luminosity information separately. Overall though, this is a minor detail.
What actually determines the function of a cortical column is not anything intrinsic to its wiring or any kind of evolutionary hardcoding, but merely whatever inputs and outputs it finds itself connected to. The auditory cortex learns to hear because it's connected to the auditory nerve bringing in signals from the cochlea in the inner ear. The visual cortex learns to see because it's connected to the optic nerve bringing in signals from the retina. Surgically swap the auditory and optic nerves, and these two regions swap functions. Damage the optic nerve and the visual cortex doesn't go dark - every cortical region is to some extent connected to every other, and those other connections just become the dominant, driving input. Your visual cortex may change its career and learn to read braille.
This uniformity has a rather profound consequence: if even one part of the cortex contains any of the navigation cells we discussed before, then every part of the cortex has that capability as well. This implies that either these navigation cells are the foundation of the brain’s general-purpose reasoning circuitry, or this navigation logic is a special case of a more general system.
There’s good scientific reason to believe this is the case, and even some theoretical work on how general-purpose reasoning can be performed with grid cells. There’s also the fact that a near-universal occurrence in human cultures is making metaphors out of navigation - right and wrong “paths through life”, associations of left with bad and right with good, and similar ideas are common across many cultures throughout the world.
Engineering as Navigation: The Holy Grail Phenomenon
As mentioned before, we can get a good idea of how a black box works by combining some context about its fundamental parts (the neuroscience background above) with observations of what happens when it is pushed to its limits and begins to fail. This section is that failure analysis, and the particular failure I’d like to explore is the concept of a “Holy Grail” in engineering.
We can think of engineering as navigating a space. We start out in a world where a certain problem is unsolved, and we are trying to navigate to a world where it is solved.
We form some analogue to grid cells that encodes our understanding of our position in this space, and that we can use to integrate over a sequence of actions; for each position in our model, and each possible action we can take at that position, we can infer the most likely position we’ll end up at after the action is taken. We can also follow a sequence of actions and infer where we might wind up.
These “positions” largely don’t correspond to real-world positions, though there’s perhaps some geometric aspect involved with the layout of components, etc. Rather, this is largely a combinatorial space, complete with all the associated exponential scaling factors.
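As a toy illustration, “integrating over a sequence of actions” in such a space might look like the sketch below, where each action toggles one of N independent design decisions. Everything here is a placeholder of my own invention; the point is the N**k growth in the number of possible plans.

```python
# Toy combinatorial "engineering space": a state is a set of enabled design
# decisions, and each action toggles one decision. All invented for illustration.
def transition(state, action):
    """Infer the most likely next state given one action."""
    return state - {action} if action in state else state | {action}

def follow(state, actions):
    """Integrate over a sequence of actions to infer where we wind up."""
    for action in actions:
        state = transition(state, action)
    return state

print(follow(frozenset(), [3, 7, 3, 11]))  # {7, 11}: action 3 toggled on, then off

# The space itself scales exponentially: with N possible actions there are
# N**k distinct plans of length k.
N = 20
for k in range(1, 6):
    print(k, N ** k)  # 20, 400, 8000, 160000, 3200000
```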
However, we certainly aren’t born with such models; we learn them through study and experience. When first discovering some new engineering field or subfield, very little is known about its structure, and this must be mapped out through experience. The models start out very simplistic and wrong, often with many unknowns and false assumptions, but gradually are refined into something fairly reliable.
Then suddenly, a new technique is discovered or a new technology invented. This usually takes the form of either something that people had wanted to do for a long time that no one could figure out how to achieve (e.g., the airplane) or something that offers a solution to an obscure problem few had considered that in retrospect is deemed important (e.g., nuclear fission, blockchains).
This new technology is like a new degree of freedom, a new basis vector, a new dimension. Extrapolation is an obvious response.
If a single Uranium bomb contains enough energy to flatten an entire city, what kinds of technologies would be possible if we power everything using small Uranium batteries?
If blockchains can let us solve some network consensus problems that were previously believed impossible, while also having a few other interesting properties, what if we try to move all of the world’s bureaucracy onto them?
In our navigation analogy, this response often involves imagining that the new technology is simple and fairly linear - that if the technology can solve a small problem in simple cases, we can naively scale it to infinity with few concerns. This is an understandable reaction to a frontier, but it is of course overly optimistic.
This corresponds to attempting to navigate through a space with relatively few obstacles. A* works fantastically in such cases - you can largely just walk in a straight line directly to your goal!
In reality however, there are often many obstacles ahead, many complex and nonlinear dynamics, many harsh tradeoffs and other factors that make engineering hard. The pathfinding is anything but easy and success is not guaranteed.
Nuclear fission produces harmful radiation that is difficult to shield against without thick layers of concrete, lead, or water. You can build a power plant using it, sure, but a handheld device or a car is a dangerous idea.
Blockchains have harsh scaling limitations and a very complex space of difficult tradeoffs. Most of the simple solutions humans imagine are simply not possible without sacrificing some of the properties that people refuse to give up. Further, some of these properties create problems of their own, such as privacy concerns that are very difficult to engineer away.
Now factor in social dynamics.
Humans communicate narratives, explaining the landmarks, obstacles, and the like along this straight path toward the ideal solution. The same path becomes well-trodden, memetic, with many engineers throwing their effort at massive obstacles that seem impossible to pass, often taking few variations on the same widely-shared path. Of course, without a convincing argument as to why the goal is unreachable, or at least why this technology is not a viable path to getting there, vast resources will often continue to be spent until people simply give up.
Beyond this basic model of Holy Grails, there are a few other wrinkles we can highlight:
It’s also worth noting the kinds of solutions that humans often consider ideal; these often take the form of “99% or 100% solutions” that cleanly solve problems that were previously impossible or extremely difficult, in ways that fit neatly into our existing models of the world. Only on extraordinarily rare and simple occasions does the universe actually conform to such fantasies, and most real solutions are much messier. Simply put, we want short, simple paths rather than long, winding ones.
This underestimation of obstacles is also an obvious contributor to the well-known phenomenon of engineering projects taking far longer than planned: “just build X, then Y, then Z, it can’t be that hard, right?”
Sometimes this ambitious target is achievable, but sometimes it’s too ambitious for its own good and turns out to be a mere pipe dream. Many imagined technologies - especially the kinds that resemble a panacea - are really just things like perpetual motion machines, obfuscated enough to confuse even the person proposing them.
Keep in mind that for a while, many very intelligent people were convinced that mathematics was provably consistent and complete - the famous mathematician David Hilbert even declared “Wir müssen wissen, wir werden wissen” (“We must know, we will know”), referring to this exact problem. He unwittingly did so the day after Kurt Gödel announced a proof that such a goal was impossible. Regardless, the ambitious if hubristic declaration made it onto his tombstone.
With that said, you never know that an obstacle is there until you discover it, and those who overestimate the number of obstacles tend to be less successful than those who underestimate them just enough to consider the attempt a worthy use of their time.
An Alternative Path
A good question to ask in response to this is whether there is any alternative approach we can take to more intelligently navigate technological frontiers. After all, large numbers of people taking largely the same path straight into a brick wall is a big waste of valuable engineering talent!
I don’t have an answer to this that I find completely satisfying, but I’d at least like to end this on some kind of positive note.
A friend of mine - Alexander Nicholi - has a dichotomy of human engineering strategies that I think is largely pointed in a fruitful direction, though I have my nitpicks with his ideas.
In brief, he draws out two main styles of computer engineering - Functionalism and Mechanicalism.
Functionalism he describes as engineering by means of towers of abstractions, building new things through simple combinations of previously-built components.
Mechanicalism is more of a bottom-up exploration of what the most basic low-level capabilities of a machine are - a slow, methodical style of engineering with a heavy emphasis on the concrete reality of what is going on.
Alex is generally fairly critical of Functionalism for being overused, arguing that its flaws are the source of many problems in modern engineering and that Mechanicalism deserves more attention.
To me, Functionalism seems similar to engineering by means of this A*-pathfinding Holy Grail mechanism, whereas Mechanicalism is closer to a breadth-first search approach to engineering. Breadth-first search is of course slower, but it has the advantage that it is not biased by a heuristic that can skew its results toward a suboptimal solution.
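Here’s a minimal sketch of that contrast. I use greedy best-first search as the heuristic-driven stand-in, since A* with a well-calibrated heuristic would correct itself - and calibration is exactly what’s missing at a frontier. The graph and the (deliberately misleading) heuristic values are invented.

```python
# Greedy best-first vs. breadth-first search on a tiny invented graph.
# The heuristic h "smells" a short path through B, but the actual short
# route runs through A.
import heapq
from collections import deque

edges = {"S": ["A", "B"], "A": ["G"], "B": ["C"], "C": ["D"], "D": ["G"], "G": []}
h = {"S": 3.0, "A": 5.0, "B": 1.0, "C": 0.9, "D": 0.5, "G": 0.0}  # misleading!

def greedy(start, goal):
    """Always expand whichever node the heuristic says is closest to the goal."""
    frontier, seen = [(h[start], start, [start])], set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt in edges[node]:
            heapq.heappush(frontier, (h[nxt], nxt, path + [nxt]))

def bfs(start, goal):
    """Expand everything level by level; slower, but heuristic-free."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in edges[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])

print(greedy("S", "G"))  # ['S', 'B', 'C', 'D', 'G'] - lured down the long path
print(bfs("S", "G"))     # ['S', 'A', 'G'] - more exploration, shorter route
```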
I don’t want to draw too close of a comparison here - these concepts aren’t exactly a 1:1 mapping, and I have some other ideas I can write about another time that I think map much closer to his - but I think there’s merit to consciously attempting to take a different approach to engineering.
It’s worth noting, however, that these engineering spaces are hyperbolic in nature; they get exponentially large very quickly. The consequence is that there is simply a massive number of simple combinations of things - many more than our intuition tends to suggest, and many more than we have any hope of ever fully understanding.
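To put numbers on that intuition: even a modest toolbox of known technologies offers a huge number of small combinations. A quick count (the toolbox size of 50 is arbitrary):

```python
# Number of ways to combine k technologies out of a toolbox of n.
from math import comb

n = 50  # a modest toolbox of known technologies
for k in (2, 3, 4, 5):
    print(k, comb(n, k))  # 1225, 19600, 230300, 2118760
```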
Oftentimes, the most interesting new technologies are combinations of old technologies in clever and unique ways. If we’re going to brute-force our way through engineering problems, perhaps some of that effort would be wisely spent experimenting with combining a variety of known technologies in unique ways, rather than pathfinding toward some Holy Grail that might not even exist.
Thank you for reading this Bzogramming article. I try my best to approach problems in unique ways and find odd connections that others may not easily see. If you’re new here and are interested in seeing more of this, feel free to share, subscribe, and read more of my writing.
The support I’ve been getting these past few weeks continues to surprise me. Thank you.