Superconductors and Such
Some thoughts on atoms, bits, and the future of engineering
There’s been some hype over the past week or so around a supposed room-temperature, ambient-pressure superconductor. There are plenty of reasons to be skeptical of it, but if it’s real it’ll be a very big deal. So far, some independent replications have suggested that it may at least be an exceptionally strong diamagnet, so the material has some interesting and potentially useful properties even if it isn’t a full superconductor.
[Note: as I was editing this, an independent paper backing up its superconducting properties has dropped.]
One interesting property of the claimed material, however, is that it’s actually fairly simple - just lead and phosphate, doped with a bit of copper. Compare that to the usual superconducting materials, which are often full of exotic and expensive elements. We will very likely know whether or not it’s actually legit within a week or two; a decent number of people are already trying to replicate it independently, including a few private individuals. If we ever get a room-temperature, ambient-pressure superconductor that sees widespread adoption, it will have to be something simple and overlooked like this lead compound; anything too exotic will simply be impossible to produce at scale.
Such a material certainly has a lot of potential, and I’ve seen a number of people claim that it could bring about “another industrial revolution.” That’s a bit of an exaggeration in my opinion, but not too big of one. Much cheaper MRIs would certainly be nice to have, and the strong diamagnetism it more plausibly has could at least be useful for building maglev trains. With that said, the idea that we *need* an innovation like this before we can start doing big things again sounds a lot like a coping mechanism to me.
America needs to start building stuff again. Stuff made from atoms, not just bits. Software certainly has its place, but it shouldn’t take priority over absolutely everything else to the extent that it has. "Tech companies" used to span all kinds of technologies, whereas today the term primarily refers to software companies. Innovation outside of computing has slowed to a crawl over the past few decades. We need to build rockets, factories, robots, planes, trains, automobiles, nuclear reactors, ships, tractors, machine tools, houses, skyscrapers, titanium foundries, and more. Much more. We need to push boundaries again in things other than merely moving electrons around.
The one person who can of course be pointed to as doing something about this, cliche as he is, is Elon Musk. While there certainly are things he does very well with his businesses, I’m inclined to say that engineering or business brilliance isn’t actually the biggest factor in his success.
I’m fairly convinced that the real secret to Elon’s success is not his brilliance in engineering or business - though that is likely a partial factor - but rather that software (and perhaps also finance) has overwhelmingly brain-drained every other industry. The people who used to build rockets and airplanes, optimize industrial processes, and design complex machines and appliances are currently all writing code instead. If you transfer the top software engineers into other fields, or at least make rockets and electric cars sexy enough that young people pursue those fields so they can work at your company, you quickly gain a gigantic talent advantage over everyone else and can rapidly make progress that otherwise wouldn’t have materialized for decades, if ever.
At the end of the day, materials science matters a lot. New materials give us new building blocks to work with and open up technologies that weren’t available before. The space of possible materials is vast, but also seemingly rather underexplored. It somehow took until the 1980s for anyone to get the idea of making an alloy by mixing five or more metals in roughly equal quantities, and only in the past decade has this gotten serious academic attention, in the form of High-Entropy Alloy research. It sounds like the first thing a child would suggest upon learning what metal alloys are (just mix them all together!), and it’s now a promising and fairly new avenue for finding materials with unique properties. I find it shocking that this wasn’t uncovered through basic experimentation a very long time ago.
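As a rough illustration of where the “high-entropy” name comes from: under the ideal-mixing approximation, the configurational entropy of an alloy is S = -R Σ xᵢ ln xᵢ, which is maximized when all mole fractions are equal. A quick back-of-the-envelope sketch:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def mixing_entropy(fractions):
    """Ideal configurational entropy of mixing, S = -R * sum(x_i * ln(x_i))."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "mole fractions must sum to 1"
    return -R * sum(x * math.log(x) for x in fractions if x > 0)

binary = mixing_entropy([0.5, 0.5])  # R*ln(2), about 5.76 J/(mol*K)
five = mixing_entropy([0.2] * 5)     # R*ln(5), about 13.38 J/(mol*K)
print(f"binary: {binary:.2f}, equimolar five-element: {five:.2f} J/(mol*K)")
```

An equimolar five-element mix gives R ln 5, more than double what any binary alloy can reach, and that extra entropy is often credited with stabilizing these unusual solid solutions.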
A lot of the intelligent effort in materials science today seems to go into things like quantum chemistry simulations - I would assume because all the smartest people even adjacent to the field write code all day, and when all you have is a hammer, everything looks like a nail. This is somewhat promising, but from listening to enough lectures and talks and reading up on the math involved, it seems to me like a poor approach. The algorithms are at best something absurd like O(n^11), and at worst exponential time. I’ll be the first to say that exponential problems are often much less scary than they’re made out to be, and that there is a plethora of tactics for making them more tractable, but there are still cases where it’s just not practical. Maybe I haven’t dug far enough into the field myself yet, but from what I’ve seen it seems like there must be better approaches.
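To make the exponential case concrete: an exact description of a quantum system with n two-state degrees of freedom needs a state vector with 2^n complex amplitudes. This sketch says nothing about clever approximate methods, only about the brute-force exact case:

```python
# The exponential wall for exact quantum simulation: n two-state degrees of
# freedom (spins, or occupied/empty orbitals) span a 2**n-dimensional state
# space, and storing one complex128 amplitude per basis state costs 16 bytes.

def state_vector_gib(n):
    """GiB needed to hold the full state vector of n two-state particles."""
    return (2 ** n) * 16 / 2 ** 30

for n in (10, 30, 50):
    print(f"n={n:2d}: {2**n:.3e} basis states, ~{state_vector_gib(n):.4g} GiB")
```

Fifty particles already demand around sixteen million GiB just to write the state down, before doing any actual computation on it.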
There’s certainly value in computational tools; you can visualize and measure properties that would otherwise be invisible in the physical system. Depending on how you architect the system, you might even be able to rewind, fast-forward, or fork and iterate on it in ways that just wouldn’t be possible in reality. However, you’ll always be resorting to approximations - often approximations of approximations of approximations of theories of physics we know aren’t completely correct anyway. You also have to contend with the fact that reality will always compute precise results faster than our computers ever will. If you lower the resolution of your simulation with aggressive enough approximations, you can sometimes get something that runs pretty fast, but you lose a lot of detail, and in materials science that detail is often what matters most - which is why the computational approaches keep running into these exponential complexity walls.
There is the argument that we can use advanced models to better understand the principles behind these systems - that a good enough computational model can give us a deeper understanding of how these systems work and accelerate innovation in general. This has some merit, but any system that is fundamentally irreducible below exponential time (as is the case with most quantum systems) will necessarily resist the kinds of simple explanations we often want. We can simplify some parts, but that will never give us the full picture, and simple trial and error and real-world experimentation will likely always play an important and irreplaceable role.
I can certainly imagine a hypothetical business that simply pays a bunch of people to mix random metals and minerals together all day and test their properties. Done at a large enough scale, it could be very effective at creating novel and valuable materials, in ways that a company with a giant supercomputer and the best quantum chemistry algorithms would likely struggle to match. If any part of such a business could benefit from computing, I expect the greatest value would be in leveraging basic Monte-Carlo-style algorithms and statistical tools to navigate chemical possibility spaces toward interesting materials more efficiently.
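To sketch what I mean, here’s a toy Monte-Carlo-style search over alloy compositions. Everything here is hypothetical: the element list is arbitrary, and `score` is a stand-in for whatever a real lab would actually measure (hardness, conductivity, and so on) - in this sketch it simply rewards balanced mixes.

```python
import math
import random

# Toy Monte-Carlo (Metropolis) search over alloy compositions.
ELEMENTS = ["Fe", "Ni", "Cr", "Cu", "Al"]

def score(comp):
    """Hypothetical objective: ideal mixing entropy (higher = more balanced)."""
    return -sum(x * math.log(x) for x in comp.values() if x > 0)

def perturb(comp, step=0.05):
    """Move a little mass from one random element to another; fractions stay >= 0 and sum to 1."""
    new = dict(comp)
    a, b = random.sample(ELEMENTS, 2)
    delta = min(step, new[a])
    new[a] -= delta
    new[b] += delta
    return new

def anneal(steps=5000, temp=0.02):
    """Metropolis rule: always accept improvements, sometimes accept regressions."""
    random.seed(0)  # deterministic for the demo
    cur = {e: 0.0 for e in ELEMENTS}
    cur["Fe"] = 1.0  # start from pure iron
    cur_s = score(cur)
    best, best_s = cur, cur_s
    for _ in range(steps):
        cand = perturb(cur)
        cand_s = score(cand)
        if cand_s >= cur_s or random.random() < math.exp((cand_s - cur_s) / temp):
            cur, cur_s = cand, cand_s
            if cur_s > best_s:
                best, best_s = cur, cur_s
    return best, best_s

best_comp, best_score = anneal()
print({e: round(x, 2) for e, x in best_comp.items()}, round(best_score, 3))
```

In a real lab the `score` call would be replaced by an actual synthesis-and-measurement loop, which is exactly where high-throughput experimentation would slot in.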
There may certainly be flaws in such an idea, but we at least need more people thinking about these issues, and we should be questioning whether the current approaches are actually ideal. Many existing areas of materials science seem to love focusing on exotic crystals made with rare earth metals and other ingredients that would never work at scale. Either that, or organic chemistry labs that try, and almost always fail, to copy biology and overcome its 4-billion-year head start with little to no understanding of how those biological systems actually work. A lab focused on high-throughput experimentation would likely be biased toward materials that are cheap and easy to produce, which is not a bad bias to have!
This brain-drain situation with software is something that I expect isn’t going to last forever. In fact, I expect things will start to seriously change by the end of the decade.
While many tech companies are happy to tell you that business is going great, their actions tell a different story. Facebook/Meta, which has all its eggs in one basket with 98% of its revenue coming from advertising, has seemed desperate to jump ship to something else for a few years. First they tried to start their own cryptocurrency with Libra/Diem, and more recently they burned $40 billion trying to build a “metaverse” that not even their own employees wanted anything to do with, while much of the public considers the technology an abomination.
Twitter/X is now trying to diversify away from ads and pivot toward more of a subscription model while its new owner is heralding it as a future WeChat-style superapp and financial giant.
Google/Alphabet is meanwhile committing fraud, fighting ad blockers, and employing other unpopular tactics, such as threatening to delete old Gmail accounts (then walking that back after public outcry) and claiming that old YouTube videos won’t be deleted - yet. Meanwhile, with the most recent quarter being an exception, YouTube (which has never been profitable to begin with) has seen multiple consecutive quarters of declining ad revenue. Keep in mind that of the plethora of new video streaming services that have popped up in the past few years, not a single one has done anything but hemorrhage money - and that’s without having to maintain what is likely a multi-exabyte video library that never shrinks.
While it’s difficult to say for certain that advertising isn’t as sustainable a business model as it seems, or that a twenty-year adtech bubble is about to burst, the old idiom that "actions speak louder than words" provides a strong hint that things aren’t as good as they appear.
I’ve personally begun backing up terabytes of Youtube videos as a side project.
It’s not just an adtech crisis that could cause a problem in the next few years. With Moore’s Law rapidly grinding to a halt, companies will no longer get exponentially growing subsidies from hardware improvements. If throwing 3x more compute at a problem can get your company 1.5x more revenue, growing this way is really only sustainable if you can somehow double your compute/$ in the time it takes to build out that compute. This works today, but will very soon stop working. I expect AI companies to be hit especially hard by this.
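A toy model of that arithmetic, using the made-up 3x-compute-for-1.5x-revenue numbers above: the cost-to-revenue ratio stays flat only while the price per unit of compute keeps halving each generation.

```python
# Toy model of the hardware "subsidy": each generation a company deploys 3x
# the compute and earns 1.5x the revenue. Margins hold only if the price per
# unit of compute halves each generation (Moore's-Law-style).

def cost_to_revenue(generations, price_halves):
    """Ratio of compute spend to revenue after some generations of 3x/1.5x growth."""
    compute, revenue, price = 1.0, 1.0, 1.0
    for _ in range(generations):
        compute *= 3.0
        revenue *= 1.5
        if price_halves:
            price /= 2.0
    return (compute * price) / revenue

print(cost_to_revenue(5, price_halves=True))   # flat: 3x compute at half price = 1.5x spend
print(cost_to_revenue(5, price_halves=False))  # blows up: (3/1.5)^5 = 32x worse
```

Once the price declines stop, compute spend compounds at 3x while revenue compounds at 1.5x, and the ratio between them doubles every generation.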
This is also true to some extent for storage costs. Most of our storage technologies are starting to run out of steam: the NAND flash in your SSD scales the same way your CPU does and will level off around the same time; RAM stopped scaling a decade ago, and the only viable replacement technology failed in the market; and mechanical hard drives are closing in on the theoretical limits of how much data can be stored per square inch of magnetic material. The only storage technology not yet close to its theoretical limits is tape, but unless companies decide to tell their customers to accept 4-hour waits for data not in a cache, or switch to a crazy business model such as renting out tape drives, the exponentially growing storage capacities of datacenters will soon become prohibitively expensive to maintain.
Chip companies are also going to have a hard time with Moore's Law slowing down, though I expect that to be a slower process. The past few decades have created an environment where real innovation is discouraged by the market; any time you spend making a CPU with a more efficient architecture is time you're not spending porting last year's architecture to smaller transistors. Throw in compilers that until recently struggled to target unusual architectures, plus the fact that our modern notion of "large volumes of open source software freely available on the internet" is a relatively recent phenomenon, and you have a strong incentive to stick with the status quo rather than innovate. Remove these factors and suddenly innovation in exotic architectures is incentivized again, and backwards compatibility matters a lot less. We're already seeing this to a large extent with GPUs and AI accelerators, but I expect it will come for CPUs soon. There are ways of making CPUs orders of magnitude more efficient, even with tiny engineering teams.
Someone is soon going to build a CPU dramatically more powerful and efficient than those Intel, AMD, or ARM produce. The biggest opportunities will involve exotic designs like tiled architectures, which are still general-purpose but will require rewriting a lot of code to optimally use the physical layout of data and compute on the chip. There are also technical reasons why tiled architectures may completely change the way context switches work, which could eliminate the factors that caused operating systems to accumulate millions of lines of driver code, making it possible for even hobbyists to build new and competitive operating systems from scratch again. As these incompatible architectures grow in popularity, many large codebases designed around obsolete hardware will need to be heavily adapted at the very least, turning them from massive assets into massive liabilities.
Software certainly isn't going to collapse entirely, but I expect the next decade or two will involve a lot of turmoil, downscaling, bursting bubbles, major bankruptcies, and new players. Many business models, and even entire products and services that seem viable today, may turn out to be unsustainable long-term, especially for companies that provide massive, resource-intensive services for free. Companies will likely have to get a lot leaner, and the exceptionally high wages for relatively little work will likely go pretty quickly.
Return to Atoms
Earlier, I called the excessive focus on superconductors a coping mechanism. My reasoning is that there are already many promising technologies, including many at the intersection of computing and other forms of engineering, that may be very valuable. The real bottleneck preventing innovation outside of computing likely isn't a lack of breakthroughs, but a lack of available talent.
I’ll list a couple examples of technologies I expect have a lot of potential, though I expect there’s a lot more out there than I know of.
Hyperspectral Imaging
One technology I expect to play a major role in the future is hyperspectral imaging. The basic premise is that, by passing light through a prism or similar optical system, you can separate out the individual component frequencies that make up that light and get significantly higher color resolution. Rather than simply having red, green, and blue color channels, you distinguish between hundreds of narrower color channels.
You are effectively getting a full spectrum for every pixel, which communicates a tremendous amount of information about the materials and chemistry in that pixel. Two different materials may look like identical shades of red to the human eye, but a hyperspectral camera may show that they are reflecting or emitting slightly different frequencies of red light, with these spectra acting like fingerprints that can identify the materials.
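One standard way this fingerprinting is done is the spectral angle mapper (SAM), which treats each pixel's spectrum as a vector and matches it to the reference spectrum with the smallest angle between them. A minimal sketch with made-up five-band spectra (real signatures have hundreds of bands):

```python
import math

# Spectral angle mapper: small angle between spectra = likely the same material.
# The spectra below are made-up toy numbers, not real material signatures.

def spectral_angle(a, b):
    """Angle in radians between two spectra; smaller = more similar."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

# Two "reds" an RGB camera can't tell apart, but with different spectra:
reference = {
    "paint_A": [0.90, 0.82, 0.15, 0.10, 0.05],
    "paint_B": [0.88, 0.60, 0.40, 0.12, 0.04],
}
pixel = [0.89, 0.80, 0.17, 0.11, 0.06]

match = min(reference, key=lambda name: spectral_angle(pixel, reference[name]))
print(match)  # paint_A is the closer spectral match
```

A nice property of the angle metric is that it ignores overall brightness, so a material in shadow still matches its fingerprint.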
If there is any technology that could funnel some of the past few decades of innovation in bits into innovation in atoms again, this is something I’d consider a good candidate.
A lot of the existing work on hyperspectral cameras involves attaching them to drones. Flying a drone over a field of crops can tell you detailed information about the health of each plant (e.g., “these plants have spots in a narrow frequency band that the human eye can’t tell apart from any other green light, which suggests a magnesium deficiency”), and flying one over open ground can reveal the minerals below, enabling easier surveying for mining.
The technology is also used in the food industry to rapidly discern healthy from unhealthy produce; there may be color differences indistinguishable to the human eye that can indicate mold or insect infestations, allowing for machines to automatically filter out bad produce with high accuracy.
Right now the biggest things that seem to be holding the technology back are software and cost. Software tools for analysis and compression of hyperspectral images are widely considered to be poor in quality, and it’s likely that some small teams of clever software engineers could push the field much further along.
In terms of cost, hyperspectral cameras often run tens of thousands of dollars. I have no idea why they are so much more expensive than a phone camera; they’re little more than an ordinary camera behind a glass prism and some other simple optics. My best guess is that the market right now is too small to support companies that don’t make large sums from individual sales. With a good marketing campaign making people aware that the technology exists and what it’s capable of, I could definitely see a “Raspberry Pi of hyperspectral cameras” company becoming enormous and sparking a lot of innovation.
Geometric Folding Algorithms
The mathematics behind origami and similar folding-based techniques has seen a lot of attention and innovation in the past few decades. Algorithms for designing and engineering complex machines based on such approaches have been developed, and these mechanisms seem to have a lot of advantages.
This is a space that sees a lot of research attention, and occasionally some attention from pop science, but hasn’t quite broken out into many applications yet.
Auto Industry Business Models
This is less of a material technology and more of a social technology - a matter of the people and ideas that surround and support a technology, as opposed to the physical gadget itself.
Software today seems to revolve around a small range of business strategies that work well for internet and app companies, but less well for others. There are a lot of VCs who are very familiar with these strategies, and likely more money available for your company if you use them than if you don’t, but that doesn’t mean they’re a good fit everywhere.
For example, the strategy of “acquire lots of customers at a loss and figure out profitability later“ can easily fail with physical products, especially if your “daily active users” only make one purchase every few years.
The early car industry was full of creative business models and tactics that seem well outside the toolbox of modern entrepreneurs. After all, cars are a very difficult sell when road networks are sparse and poorly maintained, streets are full of horseshoe nails that pop your tires, and everything is built around the assumption that no one can travel very far in a day - so there is no reason to travel far often.
Many early auto companies were vertically integrated to a degree not seen today. Valvoline did (and still does) everything from producing the various fluids that go into cars to running the service centers that actually put them in your car. Michelin famously created the entire modern restaurant industry just to sell tires, and even that is just the very tip of the iceberg. Entire books have been written about their tactics, which involved everything from printing educational magazines teaching people how to properly install tires to funding and marketing pro-natalist organizations to stave off a long-term decline in the French economy.
If we want innovation outside of software again, we’re going to need to relearn how to run successful businesses that aren’t merely software companies, and that is a much deeper and more complex problem than most people realize.
Hey, it’s been a while since the last article. This was my primary income source for a few months between jobs, but early this year I started a new job doing some really exciting work. It’s not quite ready so I’ll refrain from making major announcements here yet, but I’m excited and things are going well.
The work I’m doing there is also generating some very fascinating new ideas that are likely to make it into some really great articles here.
The new job started to take up an increasing amount of my time, and eventually the weekly publishing schedule here became unsustainable; I fell out of my regular rhythm of staying up later and later every Sunday night / Monday morning. I’ve had a number of articles in the works since then, but nothing I’ve gotten around to publishing.
I definitely plan on writing more here soon, though perhaps at a slower or less regular rate.
The holy grail of rocket engine design for decades has been the full-flow staged combustion engine; NASA attempted to build one and eventually gave up because it was too difficult. The Soviets built one, but never flew it. SpaceX has not only successfully built this impressive engine, but is now mass-producing it, with every one of the 39 engines on their Starship rocket being this near-impossible holy grail design.