I consider myself an all-rounder in work and in life. I am married to an amazing woman who says I'm the funniest guy she knows. Though likely true, I'm aware she says this just to prop me up. We live in Denver with our three dogs.
I have a unique background in that I've worked many different jobs — from vineyards in New Zealand and construction sites in Austria to software startups. I value my friendships deeply and have been called fiercely loyal, which could be seen as positive or negative, depending on the moment.
I am a friendly person and can get along with anyone, but I have little patience for excess jargon and hand-waving. To me, the truth is precious, and I love pursuing it. At the end of the day, I simply want to be the best version of myself, do good work that matters, and be around high-performing people who want the same.
A speculative but reasoned thought experiment exploring how manifolds emerge naturally from interaction space and multi-dimensional systems, using complex numbers, Clifford algebra, and loop quantum theory as a framework — and what this implies for emergent surfaces, goal surfaces, and the nature of intelligence.
A foundational idea in my next few pieces is that manifolds naturally emerge from the "interaction space" of multi-dimensional systems. This conjecture arose from meditating on the nature of intelligence. I would like to treat it as a thought experiment, as I'm fully aware of its speculative nature. There may be some mathematical rigor by which to prove or, more likely, disprove this idea, but for now it is simply a line of thinking I have found reasonable and useful for my own purposes.
The idea of emergent behavior in our reality is no new concept, though it seems to be appearing more frequently in popular conversations about math and science — or so I biasedly observe. Leonard Susskind, a theoretical physicist, proposes that our reality is an emergent holographic surface: a projection of sorts from an aggregation of lower-dimensional entities. I find this idea both reasonable and useful for my own thinking about the nature of reality and intelligence.
Now, what is an example of an emergent behavior? So far the term seems ambiguous and ill-defined, so let's look at "wetness" and "temperature." Water molecules (H₂O) are not wet. A single molecule has no wetness — it has charge, polarity, mass, and the ability to hydrogen bond. That's it. Wetness doesn't exist at the molecular level in any meaningful way. But get a few trillion molecules together, and suddenly you have a liquid that clings to surfaces, has surface tension, and produces the sensation of wetness. None of those properties were "in" any individual molecule; they emerged purely from the interaction of many molecules at scale. Temperature is the same — a statistical property of collective kinetic energy, not a property of any single molecule. I would argue each of these is mathematically representable as a manifold (I'll use manifold and surface interchangeably).
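Temperature as a statistical property can be made concrete with a toy simulation. This is illustrative only: the units, masses, and velocity distribution are arbitrary assumptions, not a physical model.

```python
import random

random.seed(0)

def mean_kinetic_energy(velocities, mass=1.0):
    # "temperature" here: proportional to the average kinetic energy
    return sum(0.5 * mass * v**2 for v in velocities) / len(velocities)

# One molecule has a kinetic energy, but no meaningful "temperature":
one = [random.gauss(0, 1)]

# A trillion is impractical here, but a million makes the point: the
# ensemble average settles near a stable value (0.5 for unit-variance
# velocities), while a single molecule's energy is all over the place.
many = [random.gauss(0, 1) for _ in range(1_000_000)]

print(mean_kinetic_energy(one))   # varies wildly run to run
print(mean_kinetic_energy(many))  # stable, near 0.5
```

The single-particle number is meaningless on its own; only the aggregate converges, which is the whole point of calling temperature emergent.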
Theoretical physicists have suggested that the foundational aspects of our human-experienced reality are themselves emergent behaviors. Space and time, the "fabric," seem to be emergent from lower-order quantum properties. I think it's not unreasonable that gravity and energy will turn out to be emergent as well. A proven quantum theory of gravity would indicate that the gravitational forces we experience emerge from an aggregation of quantum gravitational entities. I find this thought very intriguing. If true, it means that characteristics of our reality that humans have traditionally taken as foundational and immutable are actually more akin to second-order properties.
Continuing on emergence, I'll add that an emergent property exists only through the observation of it. Water is wet when interacting with something else; the rock is wet because of its interaction with water. Standing water, interacting with nothing, is not wet. Temperature isn't a thing until there's an exchange with another entity at a different temperature. The gradient implies interaction and observation. This is an important part of my argument: this "interaction space." Thus, I will sometimes refer to observation and interaction interchangeably, as they are deeply connected.
You can observe the flight of a baseball by catching it mid-flight. That is a very direct observation – a violent one for the baseball. But to observe something, even in the least invasive way, is still to interact with it. Viewing the flight of a baseball with your eyes, you're interacting with the light that bounces off it, which changes its state, even if only minutely.
So, here I'm speculating that foundational mathematical principles of hypercomplex numbers can be used to describe "interaction space", emergent behaviors that drive our reality, and the nature of intelligence. In a later piece I will attempt to connect intelligence as a higher order expression of the primitives of foundational physics. Here I'm simply laying some groundwork for that thesis hopefully without too many groans from practicing mathematicians and physicists.
Clifford algebra and the two-dimensional algebras describe naturally emergent dimensionality: i^2 = -1; j^2 = +1 with j ≠ ±1; and ε^2 = 0 with ε ≠ 0. I'll refer to i, j, and ε as generators, or "k-values." These, I will argue, make up the foundations of interaction space, which I will touch on later in this piece.
These generators have multiple interesting characteristics, but for now we'll focus on their ability to introduce a second dimension. Take a real number, say 5. Alone, it is simply a stretching of the number line. With algebra, you can translate it with addition and subtraction, stretch or shrink it with multiplication, and rotate it 180 degrees by multiplying by a negative number. Yet any operation with another real number keeps it in the first dimension. With a generator like i, a new dimension can be introduced without adding a new variable. Adding an imaginary number ("imaginary" being a misnomer for what it actually is) to a real one, as in z = x + iy, produces a rotation 90 degrees out of the first dimension, off the 1D number line, and into a second, creating the complex plane. A similar result happens with split-complex numbers (j^2 = +1, j ≠ ±1) and dual numbers (ε^2 = 0, ε ≠ 0): a second dimension emerges upon introducing these generators to a real number, albeit with planes of different geometries that we won't unpack here.
The natural rotation created by imaginary numbers is what can be used to define a circle. A circle is traced by r*e^(iθ), where r is the radius and θ is the angle; the unit circle is the case r = 1. The e^(iθ) term represents a rotation in the complex plane, and it is this rotation that creates the circular geometry: fix the radius r, and you sweep around a circle as θ changes. This has profound implications. If you take "interaction" to be an algebraic operation of multiplication, a clearer picture forms of why these hypercomplex numbers matter.
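Both claims check out numerically with Python's built-in complex type; nothing special is needed, since i is just 1j.

```python
import cmath

# Multiplying by i rotates a real number 90 degrees off the number line:
z = 5
print(z * 1j)  # purely imaginary: rotated into the second dimension

# r * e^(i*theta) traces a circle: every point lies at distance r
# from the origin, whatever the angle.
r = 2.0
for theta in [0.0, 0.7, cmath.pi / 2, 3.1, 5.5]:
    point = r * cmath.exp(1j * theta)
    assert abs(abs(point) - r) < 1e-12
```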
These hypercomplex numbers are non-real mathematical entities until interaction, k*k or k^2. You can see them as "latent potential," a dimension of possible reality, until an interaction collapses them into a real value: -1, 0, or +1 (rotation, shear, or hyperbola). It is this characteristic that I'm heavily leaning on for this speculation. With a little imagination, it maps cleanly: something that is non-real, a latent substrate, becomes real when it interacts with another bit of substrate.
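The collapse to -1, 0, or +1 can be seen concretely in the standard 2x2 matrix representations of the three generators: squaring each one yields -1, 0, or +1 times the identity. A minimal sketch in pure Python:

```python
def matmul2(a, b):
    # 2x2 matrix multiplication
    return [[sum(a[r][k] * b[k][c] for k in range(2)) for c in range(2)]
            for r in range(2)]

I_GEN = [[0, -1], [1, 0]]  # i: rotation generator, i^2 = -1
J_GEN = [[0, 1], [1, 0]]   # j: hyperbolic generator, j^2 = +1
E_GEN = [[0, 1], [0, 0]]   # ε: shear/nilpotent generator, ε^2 = 0

print(matmul2(I_GEN, I_GEN))  # [[-1, 0], [0, -1]]  ->  -1
print(matmul2(J_GEN, J_GEN))  # [[1, 0], [0, 1]]    ->  +1
print(matmul2(E_GEN, E_GEN))  # [[0, 0], [0, 0]]    ->   0
```

Each matrix is "non-real" on its own terms (it is not a scalar), yet squaring it, an interaction with itself, collapses it to a real multiple of the identity.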
There are other hypercomplex systems beyond the two-dimensional algebras: quaternions and octonions, each with their own unique properties. It is not unreasonable to think that these higher-dimensional elements of k-math could more richly describe the complexity of reality. There are, in fact, serious research programs arguing that the structure of the standard model falls out of gauge theories built from Clifford algebras.
In physics, loop quantum theories describe the universe as a network of interactions, a "graph" model. Interaction defines reality — each edge, the connection between nodes, is an observable of reality. There is no reality without interaction. A photon is not a thing until it interacts with another quantum entity. In this way you can see why quantum entanglement appears to link states faster than the speed of light: at this lower, quantum dimension of the graph ledger, time and space don't exist in the way we humans experience them. There is only interaction: nodes and edges. The universe is built from an aggregation of "quantum proofs of work" – a blockchain-like, Planck-scale ledger. Time and space thus emerge as a consequence of the dimensionality of this interaction space. Time at this layer doesn't exist in seconds but in "tick space" – each tick creating a new state of the graph of the universe. I hope to expand on this in a later piece.
In this ledger defined by interaction, certain conditions in k-math create surfaces. The hypercomplex generators model the behavior of interaction space: as individual entities they are non-real, but when they interact with each other, they create a dimension of reality.
Thinking this way, you can take a generator to represent a substrate of latent potential, a substrate of reality that is not yet real until it interacts with another substrate. And if you assume the universe is built from this interaction math, you can define a unit of interaction space as a system.
Let's define a system where each of these substrates is an independent entity: z = xε_1 + yε_2. Now, if you define an interaction on this system by squaring it, a manifold falls out: z^2 = (xε_1 + yε_2)^2 = x^2ε_1^2 + y^2ε_2^2 + 2xyε_1ε_2, three dimensions from two variables. And depending on the k-values of the substrates, you get differing types of manifolds. This manifold represents a probability surface of interaction; an interaction is a probability collapsing to a point on the surface. The surface is the holographic surface that emerges from interaction. It is upon this surface that we as humans traverse and that our reality is built.
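The "three dimensions from two variables" claim can be checked numerically. Treating the coefficients of ε_1^2, ε_2^2, and ε_1ε_2 as three coordinates, squaring z maps the (x, y) plane onto a constrained surface embedded in 3D rather than filling the space. This is a sketch of the idea, not a formal proof:

```python
import math

def interaction_surface(x, y):
    # coefficients of ε1^2, ε2^2, and ε1ε2 in z^2 = (x·ε1 + y·ε2)^2
    return (x**2, y**2, 2 * x * y)

# Every image point obeys c3^2 = 4·c1·c2, so the image is a
# two-dimensional manifold sitting inside three coordinates:
for x in [-2.0, -0.5, 0.0, 1.0, 2.5]:
    for y in [-1.5, 0.0, 0.5, 2.0]:
        c1, c2, c3 = interaction_surface(x, y)
        assert math.isclose(c3**2, 4 * c1 * c2)
```

The constraint c3^2 = 4·c1·c2 is what makes this a surface: two degrees of freedom, three coordinates.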
This is the origin of a "goal surface." Most simply defined, a goal surface is a manifold over which an intelligence traverses to reach a goal. A human walks across the surface of the earth, a jet engine traverses airspace, a task drives toward a metric. It is not something that needs to be designed or programmed in; instead, it emerges naturally from clearly defining two variables within an assumed interaction space. I will expand on this in a later piece on the nature of intelligence and what it means to traverse a goal surface.
So now we have the core principles I'll weave throughout my thought experiments: interaction space, and the dimensions that naturally arise from it. From those two primitives alone, we have a surface, and we can start to outline what it means to traverse that surface and what an intelligence requires.
AI development has surfaced something worth examining: there are now two speeds at which things can happen in the digital world — human speed and machine speed — and something fundamental has shifted in the natural language layer.
AI development has surfaced something worth examining: there are now two speeds at which things can happen in the digital world. Digital automation was already moving far faster than humans could perceive, but something has shifted specifically in the space of language. Humans require time to read, write, and understand a body of text. LLMs accomplish the same tasks in a fraction of the time and, arguably, with consistently high performance. You can already see this in agentic AI — Claude Code, for instance, can sift through codebases and generate blocks of code in seconds. The same work could take a human minutes to hours.
This creates a schism. Before LLMs, language-based digital interactions moved at human speed. UIs, text, and interfaces were designed for human interaction — and that remains true in agent-to-human contexts. With a human in the loop, the loop can only move as fast as the human can monitor or process its output. Agent-to-agent interactions are different: that loop can now move at the speed of the LLMs themselves. That's a fundamental shift with some profound implications.
To be clear, machine-speed digital processes already exist. Strictly defined algorithms have long driven automation beyond human speed. But I think something different is happening in the natural language layer of the digital world — the informal, open-ended, conversational space. Maybe it's the sheer volume of automation we're about to encounter. Human-to-human digital spaces — negotiating a couch sale on Craigslist, applying for jobs, navigating SaaS workflows, filing support tickets — can now be automated with natural language as the interface.
We briefly had an employee who imagined "two internets": one for humans, one for AIs. I don't think two literally separate internets will emerge — existing automation hasn't caused that kind of split. But as a framework for thinking about what's coming, it's a useful starting point. The digital world will accelerate drastically for AI-powered entities. Interactions that took minutes or hours for humans will take seconds. Maybe "two internets" isn't quite right, but the core idea stands: there are now two very different natural language processing speeds in play.
I don't know exactly where that leads. Maybe humans get squeezed out of digital spaces in ways that ultimately favor machines as the dominant "species." Or maybe it ushers in a world where we put down our phones and find our way back to the piazza. At the very least, commerce and digital interaction will look drastically different soon. The internet as we know it will be a different place within five years.
The themes here may seem random, but for me they are a representation of the intellectual journey I've been on over the last 2 years — profoundly moved by the development of AI, drawn into physics, philosophy, and what it means to persist and adapt.
The themes here may seem random, but for me they represent the intellectual journey I've been on over the last 2 years. I have been profoundly moved by the development of AI. It has made me question everything we traditionally understood about the world and ourselves, as though I were in college again.
There are many ways to receive the daily onslaught of news on AI development. The scale and rate of change we will see is likely unprecedented in human history. What a time to be alive! When you experience a big delta in a short time, fear is a reasonable response. I've processed many different emotions as I've come to grips with the adaptations I will need to make in my own life.
I've also seen others traverse the same thought space and arrive at similar conclusions. It's a process that takes steps. As someone who works in this space and grapples daily with these advances, I believe I'm relatively well equipped to understand what's happening. I think the general population will begin to undergo a similar evolution.
It's not out of the question that humans will need to grow accustomed to a world where agentic software systems running autonomously will generate revenue or run a train station. Change is coming, but primitives and first principles are immutable. Adaptation is a first principle of persistence. So to thrive in the future, this intellectual evolution is a necessity.
I've landed in a place of acceptance, readiness, and excitement. Predicting the exact course of the future is a futile effort, but the trends are undeniable. Nations, companies, and individuals can position themselves for enormous success by charging into this new era, and I, professionally, am cautiously eager to dive into this future. Personally, my intellectual curiosity is revving.
Fueled by an existential need to thrive in this new era, core concepts in philosophy and physics have dug their roots into my brain. I find it wild that we are here, and we can't even fully define what "we" or "here" is! So these ideas and speculations are my honest attempt to understand our reality.
On Physics and Humanity.
If you examine the world through the laws of math and physics, patterns emerge more clearly. Terms like friction, gravity, orthogonality, and entropy provide a deeper understanding. A simple example is to see human language as more than just sounds coming out of our face holes: more fundamentally, it is informational compression, akin to zipping a file.
The words (and expressions, and intonations) we use are compressions of ideas and events and are, fundamentally, data transfer. The less efficient our words, the more friction and energy loss in our communication: entropy, lossier compression. Compression means less compute, less energy, is required to process vast inputs through prediction algorithms. In data science jargon, compression is finding the latent spaces via semantic clustering.
An LLM represents another form of this compression: (roughly) the internet into a single weights file. The tokenization algorithm that reduces characters into numbers is also compression. A chatbot actualizes this through text conversation; a software agent actualizes digital action when given coding tools to operate within a computer or over networks. Again, this is just one simple example, but if you apply mathematics and physics to your world, you will see foundational concepts, in this case compression, emerge everywhere.
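The zipping analogy can be made literal with a toy (lossless) example: redundant phrasing squeezes down dramatically, while an already-dense message barely compresses at all. The sample strings below are made up for illustration.

```python
import zlib

def compression_ratio(text):
    # compressed size / original size: lower means more redundancy removed
    raw = text.encode("utf-8")
    return len(zlib.compress(raw)) / len(raw)

verbose = "at this point in time, it is the case that " * 20
dense = "meet 3pm tue rm204 bring q3 figs"

print(compression_ratio(verbose))  # small: redundancy squeezed out
print(compression_ratio(dense))    # near or above 1: already dense
```

Efficient language is the second case: already close to its latent representation, with little left for the compressor to remove.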
So if you truly want to drill down to first principles, go all the way to the bottom. If you do, you may start to draw analogies from fundamental physics to elements of your life. You may see businesses as attempts at non-dissipative systems, your relationships as harmonic or dissonant wave frequencies, or public figures as information attractors.
Now, these speculations might seem like pseudoscience from a podcast bro, but I have put intellectual rigor and focused imagination behind these thoughts. However, I'm still aware that it's likely these ideas are far from being fully accurate. I have not yet taken the time to apply existing mathematical formulas from areas like control theory, assembly theory, computational biology, information theory, etc, to these thoughts, but I think you could. Maybe if I have enough time one day, I will.
That said, we are bound by our humanity, and I embrace that. There is a beauty and tragedy to our existence. Love and pain are very real. So, while it may seem I'm coldly distilling human qualities—friendship, love, hatred, pain—into science and numbers, I am not trying to escape being human. It would be futile and frankly less fun.
On Feedback Loops, Machines, Business and Non-Dissipative Systems.
Feedback loops are everywhere. Intelligence isn't just a prediction model but a whole system, a feedback loop. Feedback loops are described in control theory with concepts like gain, actuation, and positive and negative feedback. A feedback loop has a "set point": the metric or goal the loop drives toward. This is akin to the "goal" for an intelligence to which Richard Sutton, the "father of reinforcement learning," often refers.
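These control-theory terms compose into a very small program. Below is a minimal negative-feedback loop; the names (run_loop, gain) are mine for illustration, not standard control-theory APIs. Each tick, the loop actuates proportionally to the error between the state and the set point:

```python
def run_loop(state, set_point, gain=0.5, ticks=50):
    # negative feedback: each tick, actuate proportionally to the error
    history = [state]
    for _ in range(ticks):
        error = set_point - state  # measurement against the set point
        state += gain * error      # actuation, scaled by the gain
        history.append(state)
    return history

trace = run_loop(state=0.0, set_point=10.0)
print(round(trace[-1], 6))  # converges on the set point: 10.0
```

With a gain between 0 and 1 the error shrinks geometrically each tick; too high a gain and the loop overshoots or oscillates, which is the control-theory version of an inefficient system.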
A closed loop is a system; a system is a machine. And closed systems are fractal in nature: you can zoom in or out and see the sub-loops or parent loops of the system, and those loops have the same characteristics as feedback loops.
You can easily describe a business as a machine made of many smaller feedback loops and systems. It can have very complex subsystems with set points like KPIs, autonomous miles driven, cold calls per week, etc. All of these subsystems serve up to the highest-order set point of revenue or profit.
You could take a business, or any organization really, draw out every communication, SOP, and dollar transferred, and map it to its internal and external feedback loops. It is a machine: diagrammable, albeit with human wetware run amok. Humans in these loops, measured purely against productivity or energy dissipation, are not as good as deterministic machines. We posture, compete, have emotions, get sick, and fail frequently, making these loops inefficient at times. In the near term, LLM-driven agentic systems can be applied to many of these loops.
I think one of the best exercises a business or team lead could do today is to diagram these loops and find where LLM-based agents could be applied. That is not to say you should do it just to enable a reduction in force, but rather to make existing loops more efficient and better allocate your humans.
An intelligence is a closed system with a goal; it turns entropy from mere dissipation into thrust to traverse its metric space – its goal surface.
The most efficient traversal systems are those with the least energy leakage, the least unintentional dissipation. A jet engine with leaks cannot create as much thrust; a car engine with a cracked head gasket will not deliver as much power. Similarly, the best businesses are those with the least money leakage. Usually this is achieved by a small subgroup of people keeping the wheels on, or a heavy-handed CEO ensuring value is properly delivered to customers.
If you move up and down the different scales of dimensionality, you find the same patterns of this intelligence system. A prokaryotic cell, with its energy-barrier cell wall, has its goals of survival and reproduction. A business can be viewed as an intelligence with a main goal of procuring money, among others. The two systems operate at very different scales and dimensionality, but they share the same principles. It just depends on where you draw the energy boundaries for those systems.
On Intelligence and Math.
I don't like that we call it "artificial" intelligence. There's nothing artificial about it; using that term, we are simply applying wetware biases onto digital intelligence. To call the math used in these models artificial would be to call the way our brains work artificial. The math driving LLM behavior is likely different from what runs in our cortexes, but it's simply math, and math is, most simply, prediction.
I hear folks in tech talk about prediction as if it were a novel concept; prediction has become a buzzword. "AI models make predictions," which is true. But making predictions is, of course, a fundamental feature of life. A predator predicts the movements of its prey, a tree predicts the changing of the season, a company predicts macroeconomic conditions.
Now, I won't wade into the debate between formalism and Platonism in mathematics here, but math, most simply, is prediction: the prediction of the relationship between two (or more) variables. I know I'm making some mathematicians cringe, as I am taking some liberties, but consider the simple function f(x) = x. The function predicts the resulting value given x. In this simple example, the prediction is always right!
We as intelligences operate in a high-dimensional world with only snapshots of data. The math required to make predictions has to be complex and statistical. We make bets, update our priors, and repeat.
Take Bayesian math and put it into a high-dimensional space; given data and linear algebra, voilà: prediction models. Create a loop by repeating that process against a metric: intelligence. That's obviously an oversimplification of the work necessary to create LLMs, other AI weights files, and AI systems, but at its core AI is simply math or, if you're a formalist, at least describable by math.
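That bet-observe-update loop fits in a few lines. Here is a beta-binomial sketch: estimate a coin's bias by betting, observing, and updating the prior on repeat. The true_bias value and the counts are illustrative assumptions, not a real training setup.

```python
import random

random.seed(42)
true_bias = 0.7        # hidden property of the "world"
heads, tails = 1, 1    # uniform Beta(1, 1) prior

for _ in range(5000):
    prediction = heads / (heads + tails)  # posterior mean: our current bet
    flip = random.random() < true_bias    # observe one snapshot of data
    if flip:
        heads += 1   # update the prior...
    else:
        tails += 1   # ...and repeat

estimate = heads / (heads + tails)
print(estimate)  # close to true_bias after many updates
```

Scale the same pattern up to millions of parameters and high-dimensional inputs, and you have the skeleton of a prediction model.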
On LLMs, General Intelligence, and the Near Future.
What a tool LLMs have shown themselves to be. It could be that "next-most-likely-token" algorithms over language end up being the biggest breakthrough we see in digital intelligence, though my suspicion is that even bigger breakthroughs are near. In the immediate term, LLMs will dramatically accelerate many other breakthroughs. Scientists once encumbered by the tedium of coding have been unleashed to generate models at recursive scale.
Sutton emphasized that an intelligence needs a goal, a numeric metric by which to measure its performance (implicit in current LLM offerings, it seems, is a goal of "conversation"). But I don't fully agree with Sutton's take (or perhaps I've misinterpreted him) that LLMs aren't that relevant for developing AGI. Intelligence is a process loop:
A closed system with a set point -> environment variables -> compression -> math (prediction) -> actuation -> measurement -> repeat.
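That chain of steps can be sketched literally. Every function name below mirrors a stage in the loop above; the implementations are toy stand-ins I made up for illustration, not any real framework.

```python
def compress(observations):
    # compression: many environment variables -> one summary feature
    return sum(observations) / len(observations)

def predict(feature, set_point):
    # math (prediction): estimate the corrective action required
    return set_point - feature

def actuate(observations, action):
    # actuation: act on the environment, nudging every variable
    return [obs + 0.5 * action for obs in observations]

def measure(observations, set_point):
    # measurement: distance of the system from its goal metric
    return abs(compress(observations) - set_point)

observations = [1.0, 3.0, 2.0]  # environment variables
set_point = 10.0                # the goal: a numeric metric

while measure(observations, set_point) > 1e-6:  # repeat until on target
    action = predict(compress(observations), set_point)
    observations = actuate(observations, action)

print(round(compress(observations), 3))  # 10.0
```

In LLM training, data scientists hand-run several of these stages (curating data, choosing the loss, evaluating); closing the whole loop autonomously is the step this essay argues leads toward general intelligence.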
The creation process of an LLM is the whole intelligence loop with a few of the steps taken manually by data scientists. General intelligence can be built by coordinating many of these loops; AGI is not some magic, singular model able to do everything against any data.
We are at the advent of a Cambrian explosion of these autonomous digital intelligence loops with smaller, specific goals. A general intelligence will be constructed by compiling these loops, each with a specific metric serving the aggregate goal of a larger system. I wouldn't be surprised if agentic systems harnessed to the math of computational biology are among the first such autonomous systems we see.
Maybe more wars will break out, or maybe rewarding jobs for humans that we could never have imagined will emerge. Maybe humans will start curing cancers en masse or developing novel energy grid systems. Humans currently still set the goals of AI systems, but maybe, among the life-like forms that evolve in this emerging digital world, autonomous digital systems will come to set their own.
Like a petri dish, massive data centers may pseudo-spontaneously spawn digital ecosystems. The lines will blur even more between the digital world and our 3D space as once purely digital actors, REST APIs, data centers, etc., actuate more autonomously into physical space.
It seems the cat is out of the bag here. As humans we could unplug these now recursively evolving machines, but we won't. Instead of combing through pull requests, software engineers will soon be picking from AI-created executables based on their performance against general tests – hopping from improvement to improvement. That's evolution, baby.
A friend of mine would argue that this is a natural, unavoidable evolution driven purely by thermodynamics. I'm pretty sure I agree. With recursively changing digital intelligences, the only rate limiter that bumps up their assembly index is the aperture of the energy gate given to them–the aggregate compute pumped through them.
Quantum mechanics gave us nuclear bombs but also nuclear power. There will be immense potential energy for destruction, and an equal amount of possibility for progress. International governance is a necessity.
I am likely not 100% correct here. Chaos theory shows that it is extremely difficult, effectively impossible, to predict the future in such a high-dimensional space. There may be hidden variables that naturally limit this digital evolution. Governance wielded eloquently may direct energy allocations appropriately. The future, as always, is uncertain.
At the very least, it should be clear to anyone that we are about to experience drastic change – possibly on the order of the Industrial Revolution, and on a shorter timescale.