In formal logic consistency is defined as follows:
A system is consistent if there exists at least one statement that cannot be proven.
This definition relies on the “principle of explosion”: once a single contradiction is proven, the entire logic explodes and the contradiction can be used to prove any other statement. When everything is provable, the system becomes trivial and useless.
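As a small sketch in Lean 4 (my own illustration, not from the original text), the principle of explosion is directly available as `False.elim`:

```lean
-- The principle of explosion: from a proof of `False`, any proposition follows.
example (P : Prop) (h : False) : P := False.elim h
```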
In type theory there is an equivalent definition using the Curry-Howard correspondence:
A type theory is consistent if and only if at least one type is uninhabited.
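Again in Lean 4 (my own illustration): `True` is inhabited by a proof term, while `False` is declared with no constructors, so no closed term inhabits it, and the existence of such an uninhabited type is exactly what this definition of consistency asks for:

```lean
-- `True` is inhabited: a closed proof term exists.
example : True := True.intro

-- `False` has no constructors, so no closed term inhabits it.
#check False
```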
There are many variations of type theory; the simplest possible one is what I will refer to as naive type theory.
In naive type theory, there exists a single universe, Type, which contains all types, including itself. That is, every type is an element of Type, and Type: Type.
This system suffers from the same fundamental inconsistencies as naive set theory.
We can define a version of Russell’s paradox and use it to derive a contradiction, collapsing the entire system.
In the context of type theory, the most well-known paradox of this kind is called Girard’s paradox.
The most straightforward method to ensure consistency is strong normalization: the requirement that every term reduces to a normal form. Since the empty type cannot be inhabited by a normal form, strong normalization guarantees that no term can inhabit it at all.
Ensuring strong normalization typically involves a combination of structural constraints and explicit verification. The first is to restrict the system to a safe subset using structures like inductive types and type universes. The second is to allow total freedom but require the user to provide a proof of termination for every function.
While these approaches are effective, they are often inconvenient in practice and introduce significant complexity to the system. There exists a completely different approach to dealing with the inconsistencies of naive type theory. The only reason naive type theory is inconsistent is the way we interpret it as logic through the Curry-Howard correspondence. Rather than modifying the type theory itself to ensure consistency, we also have the option of changing the correspondence. Specifically, I propose a weaker definition of consistency:
A type theory is consistent if and only if at least one type is not inhabited by a normal form.
Intuitively, we can think of normal forms as the proofs, and of other terms as processes: a normalizing term reduces to a unique value and can therefore be used as a representation of that value. Since non-normalizing terms have no normal form, they do not represent a real value, so if such a term inhabits a type we simply say it does not count as a proof. Having these terms directly inside the system is still useful, because they can be used to reason about infinity, rather than being used directly as proofs.
Infinity
The existence of infinity has long been a topic of philosophical discussion. Some argue that infinity does not exist at all, while others claim it is just a normal mathematical object like any other. Lambda calculus gives us the best of both worlds: infinity does not exist (there is no normal form of an infinite object), but it can still be represented by a term and therefore be reasoned about.
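As a loose analogy (my own illustration, not part of the argument above), a lazy Python generator behaves like such a term: the full infinite sequence never exists as a finished value, yet finite facts about it can still be computed:

```python
from itertools import count, islice

# `count()` denotes the infinite stream 0, 1, 2, ...
# The whole stream is never materialized, but the term can be observed piecewise.
naturals = count()

# Any finite observation of the infinite object terminates.
first_five = list(islice(naturals, 5))
print(first_five)  # [0, 1, 2, 3, 4]
```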
In practice, the only difference between standard systems and this system is that type checking is no longer enough to verify a proof: the proof term itself must also be normalized to count as a valid proof. Since we allow non-normalizing terms, the type checker becomes undecidable.
Because of the halting problem, there are terms for which we cannot determine whether they normalize. All we can do is attempt to normalize them and hope they eventually reach a normal form. This is a valid strategy, which means that having an undecidable type checker can actually be an advantage for these kinds of problems.
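This “attempt and hope” strategy can be sketched in Python, assuming a toy untyped lambda calculus encoded as tuples (the encoding and the `normalize` helper are my own illustration, not a standard library): we reduce step by step with a fuel limit and give up if the fuel runs out.

```python
# Toy untyped lambda calculus: terms are ("var", name), ("lam", name, body),
# or ("app", fn, arg). Substitution is naive; the examples below use distinct
# variable names, so capture cannot occur here.

def subst(term, name, value):
    kind = term[0]
    if kind == "var":
        return value if term[1] == name else term
    if kind == "lam":
        # Do not substitute under a binder that shadows the name.
        return term if term[1] == name else ("lam", term[1], subst(term[2], name, value))
    return ("app", subst(term[1], name, value), subst(term[2], name, value))

def step(term):
    """One leftmost-outermost beta step, or None if already in normal form."""
    kind = term[0]
    if kind == "app":
        fn, arg = term[1], term[2]
        if fn[0] == "lam":                      # beta-redex: (λx. body) arg
            return subst(fn[2], fn[1], arg)
        reduced = step(fn)
        if reduced is not None:
            return ("app", reduced, arg)
        reduced = step(arg)
        if reduced is not None:
            return ("app", fn, reduced)
    elif kind == "lam":
        reduced = step(term[2])
        if reduced is not None:
            return ("lam", term[1], reduced)
    return None

def normalize(term, fuel=100):
    """Reduce until a normal form is reached or the fuel runs out."""
    for _ in range(fuel):
        nxt = step(term)
        if nxt is None:
            return term                         # normal form found
        term = nxt
    return None                                 # gave up: possibly diverging

# (λx. x x) (λx. x x) reduces to itself forever.
omega = ("app", ("lam", "x", ("app", ("var", "x"), ("var", "x"))),
                ("lam", "x", ("app", ("var", "x"), ("var", "x"))))

print(normalize(("app", ("lam", "x", ("var", "x")), ("var", "y"))))  # ('var', 'y')
print(normalize(omega))                                              # None
```

A real checker would retry with more fuel or run reduction alongside other work; the point is only that normalization is attempted, never decided in advance.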
Paraconsistency
You might have noticed that this system is not actually consistent but rather paraconsistent. Whenever a contradiction occurs, the principle of explosion is prevented: the contradiction reduces to a non-normalizing term, so the type checker fails to terminate rather than giving a false positive.
At first sight, the mere existence of a contradiction, even if it does not blow up, seems problematic.
For some P, both P and P -> False can be inhabited by a normal form at the same time.
This means having a proof of P -> False is no longer enough to conclude that P has no normal form.
Law of double self application
If we consider the simplest non-normalizing term, (λx. x x) (λx. x x), we observe that it consists of two identical lambdas applied to one another, each making use of self-application.
It turns out this is the core idea behind all non-normalizing terms. All non-normalizing terms consist of two lambdas applied to each other that both contain self application.
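A rough syntactic check for this pattern can be sketched in Python, again over a tuple encoding of lambda terms (the encoding and the `uses_self_application` helper are my own hypothetical illustration; the check is purely syntactic, so it only detects the literal `x x` shape):

```python
# Terms: ("var", name), ("lam", name, body), ("app", fn, arg).

def uses_self_application(term):
    """Detect a subterm of the form `x x`: a variable applied to itself."""
    kind = term[0]
    if kind == "var":
        return False
    if kind == "lam":
        return uses_self_application(term[2])
    fn, arg = term[1], term[2]
    if fn[0] == "var" and arg[0] == "var" and fn[1] == arg[1]:
        return True
    return uses_self_application(fn) or uses_self_application(arg)

delta = ("lam", "x", ("app", ("var", "x"), ("var", "x")))   # λx. x x
ident = ("lam", "x", ("var", "x"))                          # λx. x

print(uses_self_application(delta))  # True
print(uses_self_application(ident))  # False
```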
Hence, if there exists a proof of P -> False that does not involve self-application, then P must be uninhabited.
If it does use self-application, a potential proof of P must use self-application as well.
Most normal forms used as proofs do not contain self-application at all, so they remain valid in classical logic.
Foundational system
The three pillars of any foundational system are consistency, minimalism and expressiveness. Standard type theories are consistent, but have to trade off minimalism against expressiveness because they enforce strong normalization. Since deciding normalization is equivalent to the halting problem, any restriction on normalization will be overly conservative.
The new system is paraconsistent; this is its only downside, and it might not matter much in practice. Since we do not require strong normalization, it is maximally expressive while remaining extremely minimal. The only question remaining is whether a fixed-point combinator can be defined, and if so, whether it is simple enough to be practical. I am highly confident such a term can be found, but it remains an open question for now.