"C* Does Not Have a Type System"
The truth in a friend's mad ramblings
My friend Alexander has his own programming language called C*. It can perhaps be thought of as a much more conservative version of Bzo, a kind of C dialect with a constraint system layered on top called Law and Order that resembles a set of features I have planned for Bzo. In fact, our friendship started with me reaching out to him to chat after I read about C* and noticed some overlap in ideas.
Alex is also prone to spicy opinions and tweets, one recurring claim being that “C* has no type system”.
The claim, taken literally, is false. C* inherits C's weak but static type system, albeit with something vaguely resembling functional-style dependent types layered on top.
The merit of this false claim, however, is that Alex chooses not to phrase things in familiar type-theoretic terms. Rather than building a fancier type system, Alex aims to build something that resembles a type system on the surface but subverts many common assumptions beneath it. The programming-language equivalent of building a car rather than breeding a faster horse.
In other words, Alex is not asking “how do I build a better type system”, but instead “how do I better solve the problems that type systems are normally employed to solve?” There’s a big difference.
In a similar respect, while I commonly refer to Bzo as a programming language for the sake of having a convenient framing for explaining it to people, my actual vision and mode of thinking of it is more like “Bzo is not a programming language”.
The concept of a programming language, and the wide range of assumptions made about how a modern programming language should work, hold us back. We descend deeper and deeper into a local minimum, while the notion that this might not be the global minimum is utterly unthinkable.
No other engineering discipline sweeps as much complexity under the rug with abstractions as we do. Sure, not all engineered systems are Turing-complete, but CPUs certainly are, and much of the low-level layout of chips is still done by hand, because performance matters and humans still beat even AI-based tools in some cases. The extreme complexity of modern computing, and the fragility of systems built by importing a trillion-and-one dependencies, certainly isn’t helping. Is ignoring complexity really the best option, or might we be better off building better tools for managing it?
Moore’s Law is slowing down, and will not be available for us to exploit much longer. Hardware is getting progressively stranger, which will be reflected in the code we write for it. Our theoretical understanding of the nature of computation is getting much more bizarre and sophisticated, even if most of CS has yet to catch up. The most dominant computing devices today don't even have keyboards; they are built around touch screens and voice control.
Are we really prepared for the future? Do we really expect anything resembling modern programming languages to be around in 1000 years, let alone 100?
Thank you for reading this brief Bzogramming article.