I consider myself an all-rounder in work and in life. I am married to an amazing woman who says I'm the funniest guy she knows. Though likely true, I'm aware she says this just to prop me up. We live in Denver with our three dogs.
I have a unique background in that I've worked many different jobs — from vineyards in New Zealand and construction in Austria to software startups. I value my friendships deeply and have been called fiercely loyal, which I think could be seen as positive or negative at times.
I am a friendly person and can get along with anyone, but I have little patience for excess jargon and hand-wavey actions. To me, the truth is precious, and I love pursuing it. At the end of the day, I simply want to be the best version of myself, do good work that matters, and be around highly performant people who want the same.
AI development has surfaced something worth examining: there are now two speeds at which things can happen in the digital world. Digital automation was already moving far faster than humans could perceive, but something has shifted specifically in the space of language. Humans require time to read, write, and understand a body of text. LLMs accomplish the same tasks in a fraction of the time and, arguably, with consistently high performance. You can already see this in agentic AI — Claude Code, for instance, can sift through codebases and generate blocks of code in seconds. The same work could take a human minutes to hours.
This creates a schism. Before LLMs, digital things moved at human speed. UIs, text, and interfaces were designed for human interaction — and that remains true in agent-to-human contexts. With a human in the loop, the loop can only move as fast as the human can monitor or process its output. Agent-to-agent interactions are different. That loop can now move at the speed of the LLMs themselves. That's a fundamental shift with some profound implications.
To be clear, machine-speed digital processes already exist. Strictly defined algorithms have long driven automation beyond human speed. But I think something different is happening in the natural language layer of the digital world — the informal, open-ended, conversational space. Maybe it's the sheer volume of automation we're about to encounter. Human-to-human digital spaces — negotiating a couch sale on Craigslist, applying for jobs, navigating SaaS workflows, filing support tickets — can now be automated with natural language as the interface.
We briefly had an employee who imagined "two internets": one for humans, one for AIs. I don't think two literally separate internets will emerge — existing automation hasn't caused that kind of split. But as a framework for thinking about what's coming, it's a useful starting point. The digital world will accelerate drastically for AI-powered entities. Interactions that took minutes or hours for humans will take seconds. Maybe "two internets" isn't quite right, but the core idea stands: there are now two very different natural language processing speeds in play.
I don't know exactly where that leads. Maybe humans get squeezed out of digital spaces in ways that ultimately favor machines as the dominant "species." Or maybe it ushers in a world where we put down our phones and find our way back to the piazza. At the very least, commerce and digital interaction will look drastically different soon. The internet as we know it will be a different place within five years.
A foundational idea in my next few pieces is that emergent behaviors and manifolds are naturally occurring phenomena arising from the interactions of multi-dimensional systems. This piece covers an idea I arrived at using AI as a thought partner, originating from my own late-night ideations and thought experiments. So I would like to treat this idea as more of a thought experiment, as I'm fully aware of its speculative nature. There may be some mathematical rigor by which to prove or disprove this idea, but for now it is just a thought experiment that I have found to be reasonable and useful for my own thinking.
The idea of emergent behaviors in our reality is no new concept, though it seems to be appearing more frequently in popular conversations about math and science, or so I biasedly observe. On the rigorous end of the spectrum, the theoretical physicist Leonard Susskind theorizes that our reality is holographic: that what we experience is a projection of information encoded on a lower-dimensional surface, and that the behaviors we observe are emergent from the interactions of the underlying dimensions. This is actually a useful mechanism for approaching emergent behaviors.
Now what is an example of an emergent behavior? It seems quite ambiguous and ill-defined thus far. Let's look at "wetness" and "temperature." Water molecules (H₂O) are not wet. A single molecule has no wetness — it has charge, polarity, mass, and the ability to hydrogen bond. That's it. Wetness doesn't exist at the molecular level in any meaningful way. But get a few trillion of them together, and suddenly you have a liquid that clings to surfaces, has surface tension, and produces the sensation of wetness. None of those properties were "in" any individual molecule. They emerged purely from the interaction of many molecules at scale. Temperature is similar. A single molecule doesn't have a temperature. It has kinetic energy, but temperature is a statistical property that emerges from the collective behavior of many molecules bouncing around.
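To make the statistical nature of temperature concrete, here is a toy numeric illustration in Python (units and Boltzmann's constant stripped out for simplicity; the numbers are illustrative, not physical):

```python
import random

# Each molecule has a kinetic energy; none of them "has" a temperature.
energies = [random.gauss(1.0, 0.2) for _ in range(100_000)]

# Temperature appears only as a statistic over the whole crowd.
temperature = sum(energies) / len(energies)
print(round(temperature, 3))  # ~1.0, a property of the ensemble, not of any single member
```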
Theoretical physicists have suggested that foundational aspects of our human-experienced reality are emergent behaviors. Space and time (the fabric) seem to be emergent from lower-order properties. I think it's not unreasonable that gravity and energy will turn out to be emergent behaviors themselves. Any quantum theory of gravity would indicate that the gravitational forces we experience are emergent from an aggregation of quantum gravitational entities.
I'm proposing a mathematical framework for the emergence of manifolds as described by complex numbers and the dimensionality of complex planes. I will eventually relate this to intelligences in a later piece, so this piece is meant to lay some logical groundwork.
In this piece, there are three logical trust falls along this thought experiment, and the very need for such trust falls adds to the speculative nature of the idea. But at the very least, I think it's a useful framework for thinking about emergence, and it has some interesting implications for how we understand our reality.
The first trust fall: Complex planes create dimensionality without new variables.
Take a number as a pure, standalone entity, for example the number 5. Alone, it is simply a stretching of the number line. You can stretch it further with addition, subtraction, and multiplication. You can send it the other way on the number line with negative numbers. Yet any mathematical action taken on it with another real number keeps it in the first dimension. A complex number is defined as a + bi, where a and b are real numbers and i is the square root of -1. This is an operation on a single number that creates a new dimension without introducing a new variable. Yes, our variable a (let's keep it 5) had a second variable b introduced to it, but b only represents the real scalar of the imaginary part. The operation never introduced a new variable, only an operation with a number that isn't representable in the same dimension but exists nonetheless. "Imaginary numbers" is a misnomer for what they actually are: a mathematical operation that creates a new dimension.
And something interesting happens when you multiply real numbers by imaginary numbers (again, poorly named). You naturally create an emergent dimension — the operation rotates a real number 90 degrees out of the first dimension into a second (the complex plane). Any operation on a real number with another real number will again stretch the number along the number line in the same direction or, if multiplying by a negative, rotate it 180 degrees. But multiplying by an imaginary number rotates it 90 degrees, creating a new dimension. This is a fundamental operation that creates emergent behavior.
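You can see this rotation directly in Python, which has complex numbers built in (a minimal sketch; the particular numbers are arbitrary):

```python
a = 5                 # a real number: one point on the 1-D number line
print(a * 3)          # 15 -> real times real stretches along the same line
print(a * -1)         # -5 -> a negative flips it 180 degrees, still on the line
print(a * 1j)         # 5j -> multiplying by i rotates it 90 degrees onto a new axis
print(a * 1j * 1j)    # (-5+0j) -> two 90-degree rotations = 180 degrees, back on the real line

import cmath
print(a * cmath.exp(1j * cmath.pi / 2))  # ~5j, up to floating-point error: e^(i*theta) rotates by any angle
```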
The second trust fall: Interaction space.
Loop quantum theories describe the universe as a network of interactions, a "graph" model in a way. Interaction defines our reality — the edge or connection between nodes is an observable of reality. There is no reality without interaction. A photon is not a thing until it interacts with another quantum entity. In this way you can see why quantum entanglement seems to send information faster than the speed of light. At a lower layer of this graph ledger (yes, speculatively), time and space don't exist in the way we humans experience them. There is just interaction: nodes and edges. It could be something like Planck-scale "proofs of work" on a blockchain-like ledger. Time and space emerge as a consequence of the dimensionality of this interaction space. Time doesn't exist in seconds at this layer, but rather in "tick space," each tick creating a new stage of the graph of the universe.
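As a toy sketch of that picture, here is the graph ledger as a data structure. Everything here (Interaction, Ledger, the tick counter) is hypothetical naming for illustration, not any real physics library:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Interaction:
    a: int       # one node
    b: int       # the other node
    tick: int    # the ledger stage at which this edge was written

@dataclass
class Ledger:
    tick: int = 0
    edges: list = field(default_factory=list)

    def interact(self, a: int, b: int) -> None:
        # No recorded interaction, no observable: the edge *is* the reality.
        self.edges.append(Interaction(a, b, self.tick))

    def advance(self) -> None:
        # "Time" here is not seconds, just the count of graph stages.
        self.tick += 1

universe = Ledger()
universe.interact(0, 1)   # a photon becomes a "thing" only by interacting
universe.advance()        # one tick: a new stage of the graph
universe.interact(1, 2)
print(universe.edges)
```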
Now, I think there are probably mathematical conclusions and proofs to be made using a "graph ledger" theory of reality, and I will explore that one day down the road. For now, we will treat interaction space as foundational to reality. Take our variable, a = 5. You could argue that even 5, as much as we want to abstract it into a standalone real number on a number line, cannot exist without us "observing" it. Nothing exists without an observer, an interaction. We can try as we might to imagine an abstract standalone number existing in a vacuum of nothing, but we cannot abstract ourselves away from the system. By observing a number line, we by default make ourselves a part of that system — we've observed it. Even a simple function f(x) = x has a third, implicit variable of interaction: us, the creator and observer of the function. So it is inescapable that all math has a built-in framework of interaction by default.
The third trust fall: Dimensionality in interaction space creates surfaces (manifolds).
Now place our complex number in interaction space: zᵢ = xᵢ + iyᵢ is the full representation of an entity as the sum of its realized interactions xᵢ and its latent interaction potential iyᵢ, the interactions its current state implies are available to it but have not yet occurred. xᵢ is what the entity has become through interaction; iyᵢ is what that becoming implies it could still interact with.
My argument is that the dimensionality created by the complex plane in interaction space is the substrate from which manifolds/surfaces emerge. Even a single variable can create a rich manifold through the dimensionality of its current state and its potential state. Two axes emerge from a single variable existing in interaction space: the real axis is the current state of the entity, and the imaginary axis is its potential state. The complex plane is the space in which these two axes interact, and it is from the tension between them that surfaces emerge.
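To make the two axes concrete, here is a hedged sketch in Python of an entity whose interactions convert latent potential into realized state (the framing and the names are my illustration, not an established formalism):

```python
def interact(z: complex, amount: float) -> complex:
    # Realize some latent potential: move it from the imaginary axis to the real one.
    amount = min(amount, z.imag)
    return complex(z.real + amount, z.imag - amount)

entity = complex(5.0, 2.0)   # x = 5 realized interactions, y = 2 latent potential
entity = interact(entity, 0.5)
print(entity)                # (5.5+1.5j): becoming converts potential into state
```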
So, the emergent behaviors we see in our reality can be understood as the interaction of all of these different dimensionalities: real entities, the universal substrate, and their possibility space. Emergent behaviors occur by virtue of this interaction space — not by accident, and not by design, but as a structural consequence of entities that exist, interact, and therefore imply their own possibility space.
This is the origin of a goal surface. Most simply defined, it emerges when a variable is defined, has an actual current state, and has a desired future state. I will expand on this in a later piece on the nature of intelligence and what it means to traverse a goal surface. But for now, the important point is that a goal surface is an emergent property of the interaction of an entity with itself in interaction space. It is not something that needs to be designed or programmed in; rather, it emerges naturally from clearly defining a variable and its potential state.
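A toy rendering of that claim, continuing the complex-number framing from above (the particular numbers and the name goal_height are mine, purely for illustration):

```python
desired = complex(8.0, 0.0)   # a desired future state: the potential fully realized

def goal_height(z: complex) -> float:
    # The "elevation" of the surface at any state: the distance left to travel.
    return abs(desired - z)

current = complex(5.0, 2.0)
print(round(goal_height(current), 3))  # 3.606 -> nonzero height at the current state
print(goal_height(desired))            # 0.0 at the goal itself
```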
So now we have the core principles that I'll weave throughout my thought experiments: interaction space, and the dimensions that arise naturally from possible interaction space. From those two primitives alone we have a surface, and we can start to outline what it means to traverse that surface, and from that outline, what an intelligence requires.
The themes here may seem random, but for me they are a representation of the intellectual journey I've been on over the last 2 years. I have been profoundly moved by the development of AI. It has made me question everything we traditionally understood about the world and ourselves like I was in college again.
There are many ways to receive the daily onslaught of news on AI development. The scale and rate of change we will see is likely unprecedented in human history. What a time to be alive?! When you experience a big delta in a short time, fear is a reasonable response. I've processed many different emotions as I've come to grips with the adaptations I will need to make in my own life.
I've also seen others traverse the same thought space and arrive at similar conclusions. It's a process that takes steps. As someone who works in this space and grapples daily with these advances, I believe I'm relatively well equipped to understand what's happening. I think the general population will begin to undergo a similar evolution.
It's not out of the question that humans will need to grow accustomed to a world where agentic software systems running autonomously will generate revenue or run a train station. Change is coming, but primitives and first principles are immutable. Adaptation is a first principle of persistence. So to thrive in the future, this intellectual evolution is a necessity.
I've landed in a place of acceptance, readiness, and excitement. Predicting the exact course of the future is a futile effort, but the trends are undeniable. Nations, companies, and individuals can position themselves for enormous success by charging into this new era, and I, professionally, am cautiously eager to dive into this future. Personally, my intellectual curiosity is revving.
Fueled by an existential need to thrive in this new era, I've let core concepts in philosophy and physics dig their roots into my brain. I find it wild that we are here, and we can't even fully define what "we" or "here" is! So these ideas and speculations are my honest attempt to understand our reality.
On Physics and Humanity.
If you examine the world through the laws of math and physics, patterns emerge more clearly. Terms like friction, gravity, orthogonality, and entropy provide a deeper understanding. A simple example is to see human language not merely as sounds coming out of our face holes but more fundamentally as informational compression, akin to zipping a file.
The words (and expressions, and intonations, etc.) that we use are compressions of ideas and events, and are fundamentally data transfer. The less efficient our words, the more friction and loss of energy in our communication: entropy, a lossier compression. Compression means less compute or energy is required to process the vast input variables through prediction algorithms. In data science jargon, compression is finding latent spaces via semantic clustering.
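The zipping analogy is easy to see directly. A quick illustration using Python's zlib as the stand-in zipper: repetitive language compresses dramatically because it carries little new information per byte (the sample text is arbitrary):

```python
import zlib

text = b"The words that we use are compressions of ideas and events. " * 50
compressed = zlib.compress(text)
print(len(text), len(compressed))             # ~3000 bytes -> a small fraction of that
print(round(len(compressed) / len(text), 3))  # tiny ratio: repetition adds almost no information
```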
An LLM represents another form of this compression: the "internet" (ish) compressed into a single weights file. The tokenization algorithm that reduces characters into numbers is also compression. A chatbot actualizes this through text conversation. A software agent can actualize digital action when given coding tools to operate within a computer or over networks. Again, this is just one simple example, but if you apply mathematics and physics to your world, you will see foundational concepts, in this case compression, emerge everywhere.
So if you truly want to drill down to first principles, go all the way to the bottom. If you do this, you may start to draw analogies from fundamental physics to elements in your life. You may see businesses as attempts at non-dissipative systems, your relationships as harmonic or dissonant wave frequencies, or public figures as information attractors.
Now, these speculations might seem like pseudoscience from a podcast bro, but I have put intellectual rigor and focused imagination behind these thoughts. However, I'm still aware that it's likely these ideas are far from being fully accurate. I have not yet taken the time to apply existing mathematical formulas from areas like control theory, assembly theory, computational biology, information theory, etc, to these thoughts, but I think you could. Maybe if I have enough time one day, I will.
That said, we are bound by our humanity, and I embrace that. There is a beauty and tragedy to our existence. Love and pain are very real. So, while it may seem I'm coldly distilling human qualities—friendship, love, hatred, pain—into science and numbers, I am not trying to escape being human. It would be futile and frankly less fun.
On Feedback Loops, Machines, Business and Non-Dissipative Systems.
Feedback loops are everywhere. Intelligence isn't just a prediction model, but a whole system or feedback loop. Feedback loops are described in control theory with concepts like gain, actuation, positive and negative feedback, etc. Feedback loops have a "set point" which is the metric or goal for the feedback loop. This is akin to the "goal" for an intelligence to which Richard Sutton, the "father of reinforcement learning," often refers.
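As a minimal sketch of that control-theory vocabulary (set point, gain, negative feedback), here is a toy proportional controller in Python; the numbers are arbitrary:

```python
set_point = 70.0   # the goal/metric the loop measures itself against
state = 60.0       # the thing being controlled (say, a room temperature)
gain = 0.3         # how hard the loop actuates on each unit of error

for tick in range(30):
    error = set_point - state  # measurement against the set point
    state += gain * error      # actuation: negative feedback shrinks the error

print(round(state, 2))         # ~70.0 -> the loop settles on its set point
```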
A closed loop is a system, and a system is a machine. And closed systems are fractal in nature; you can zoom in or out and see the sub-loops or parent loops of a system. Those loops have the same characteristics as feedback loops.
You can easily describe a business as a machine made of many smaller feedback loops and systems. It can have very complex subsystems with set points like KPIs, autonomous miles driven, cold calls per week, etc. All of these subsystems roll up to the highest-order set point of revenue or profit.
You could take a business, or any organization really, draw out every communication, SOP, dollar transferred, etc., and map it to its internal and external feedback loops. It is a machine: diagrammable, albeit with human wetware running amok. Humans in these loops, if measured purely against productivity or energy dissipation, are not as good as deterministic machines. We posture, compete, have emotions, get sick, and fail frequently, making these loops inefficient at times. In the near term, LLM-driven agentic systems can be applied to many of these loops.
I think one of the best exercises you could do as a business or any team lead today would be to diagram these loops and find where LLM-based agents could be applied. That is not to say you should do that just to enable a reduction in force, but instead to make existing loops more efficient and better allocate your humans.
An intelligence is a closed system with a goal; it turns entropy from mere dissipation into thrust to traverse its metric space, its goal surface.
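A hedged sketch of what traversal can look like in code: gradient descent on a one-dimensional goal surface, where each step is directed movement downhill rather than dissipation (a stand-in illustration, not a claim about any particular system):

```python
def surface(x: float) -> float:
    return (x - 3.0) ** 2   # a 1-D goal surface whose goal (minimum) sits at x = 3

def slope(x: float) -> float:
    return 2.0 * (x - 3.0)  # the local gradient of that surface

x = 0.0                     # current state, far from the goal
for _ in range(100):
    x -= 0.1 * slope(x)     # thrust: spend energy moving along the surface

print(round(x, 3))           # ~3.0 -> the goal
print(round(surface(x), 6))  # ~0.0 -> no height left to descend
```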
The most efficient traversal systems are those with the least energy leakage or unintentional dissipation. A jet engine with leaks would not be able to create as much thrust; a car engine with a cracked head gasket will not deliver as much power. Similarly, the best businesses are those with the least money leakage. Usually this is achieved by a small subgroup of people keeping the wheels on, or a heavy-handed CEO ensuring value is properly delivered to customers.
If you go up and down the different dimensionality scales, you find the same patterns of this intelligence system. A prokaryotic cell, with its energy-barrier cell wall, has its goals of survival and reproduction. A business can be viewed as an intelligence with a main goal of procuring money, among other goals. The two systems are at much different scales and dimensionalities, but they run on the same principles. It just depends on where you draw the energy boundaries for those systems.
On Intelligence and Math.
I don't like that we call it "artificial" intelligence. There's nothing artificial about it. Using that term, we are simply applying wetware biases onto digital intelligence. To call the math used in these models artificial would be to call the way our brains work artificial. The math driving LLM behavior is likely different from what is in our cortexes, but it's simply math, and math is, most simply, prediction.
I hear folks in tech talk about prediction as a relatively novel concept. Prediction is a buzzword: "AI models make predictions," which is true. But making predictions is, of course, a fundamental feature of life. A predator predicts the movements of its prey, a tree predicts the changing of the seasons, a company predicts macroeconomic conditions, and so on.
Now, I won't wade into the debate of formalism vs Platonism in mathematics here, but math, most simply, is prediction. It's the prediction of the relationship between two (or more) variables. And I know I'm making some mathematicians cringe with these words as I am taking some liberties here, but consider the simple function f(x) = x. The function is making a prediction of the resulting value given x. In this simple example, the prediction is always right!
We as intelligences operate in a high-dimensional world with only snapshots of data. The math required to make predictions has to be complex and statistical. We make bets, update our priors, and repeat.
Take Bayesian math and put it into a high-dimensional space; given data and linear algebra, voila—prediction models. Create a loop by repeating that process while measuring against a metric—intelligence. That's obviously an oversimplification of the work necessary to create LLMs, other AI weights files, and AI systems, but at its core AI is simply math or, if you're a formalist, at the very least describable by math.
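"Make bets, update our priors, and repeat" is runnable in a few lines. Here is a toy Bayesian loop estimating a coin's bias with a Beta prior (the standard conjugate pair; the specific bias of 0.7 is arbitrary):

```python
import random

alpha, beta = 1.0, 1.0   # a uniform Beta prior over the coin's unknown bias
true_bias = 0.7

for _ in range(500):
    heads = random.random() < true_bias  # observe one flip of the world
    alpha += heads                       # update the prior with the new data point
    beta += not heads

print(round(alpha / (alpha + beta), 2))  # posterior mean -> ~0.7: the bet improves with data
```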
On LLMs, General Intelligence, and the Near Future.
What a tool LLMs have shown themselves to be. It could be that "next-most-likely-token" algorithms over language are the biggest breakthrough we'll see in digital intelligence, though my suspicion is that we will see even bigger breakthroughs than LLMs in the near future. In the immediate term, LLMs will dramatically accelerate many more breakthroughs. Scientists once encumbered by the tedium of coding have been unleashed to generate models at recursive scales.
Sutton emphasized that an intelligence needs a goal, a numeric metric by which to measure its performance (implicit in current LLM offerings is a goal of "conversation," it seems). But I don't fully agree with Sutton's take (or perhaps I've misinterpreted him) that LLMs aren't as relevant for developing AGI. Intelligence is a process loop:
A closed system with a set point -> environment variables -> compression -> math (prediction) -> actuation -> measurement -> repeat.
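Here is that loop written out as a hedged toy in Python; every piece (the set point, the one-parameter "model," the noisy sensing) is a stand-in for illustration, not a claim about how LLMs are built:

```python
import random

set_point = 10.0   # the numeric goal the system measures itself against
state = 0.0        # the environment variable the loop can influence
model_gain = 0.5   # the "weights" of a one-parameter prediction model

for tick in range(50):
    reading = state + random.gauss(0, 0.1)  # environment variables (noisy sensing)
    error = set_point - reading             # compression: the input boiled down to one number
    action = model_gain * error             # math (prediction): how much actuation is needed
    state += action                         # actuation; measurement repeats next tick

print(round(state, 1))                      # hovers near the set point
```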
The creation process of an LLM is the whole intelligence loop with a few of the steps manually taken by data scientists. General intelligence can be built by coordinating many of these loops. AGI is not some magic, singular model that will be able to do everything against any data.
We are at the advent of a Cambrian explosion of these autonomous digital intelligence loops with smaller, specific goals. A general intelligence will be constructed by compiling these loops, all with specific metrics meant to serve the aggregate goal of a larger system. I wouldn't doubt that an agentic system harnessed to the math of computational biology is an example of the kind of autonomous system we will start to see.
Maybe more wars will break out, or maybe rewarding jobs for humans that we could never have imagined will emerge. Maybe humans will start curing cancers en masse or develop novel energy grid systems. Humans currently still drive the goal of AI systems, but maybe out of the life-like forms that evolve out of this emerging digital world, autonomous digital systems will set their own goals.
Like a petri dish, massive data centers may pseudo-spontaneously spawn digital ecosystems. The lines will blur even more between the digital world and our 3D space as once purely digital actors (REST APIs, data centers, etc.) actuate more autonomously into physical space.
It seems the cat is out of the bag here. As humans we could unplug these now recursively evolving machines, but we won't. Instead of combing through pull requests, software engineers will soon be picking from AI-created executables based on their performance against general tests, hopping from improvement to improvement. That's evolution, baby.
A friend of mine would argue that this is a natural, unavoidable evolution driven purely by thermodynamics. I'm pretty sure I agree. With recursively changing digital intelligences, the only rate limiter that bumps up their assembly index is the aperture of the energy gate given to them: the aggregate compute pumped through them.
Quantum physics gave us nuclear bombs but also nuclear power. There will be immense potential energy for destruction, but an equal amount of possibility for progress. International governance is a necessity.
I am likely not 100% correct here. Again, chaos theory shows that it is extremely difficult, effectively impossible, to predict the future in this high-dimensional space. There may be hidden variables that naturally limit this digital evolution. Governance wielded eloquently may direct energy allocations appropriately. The future, as always, is uncertain.
At the very least, it should be very clear to anyone that we are about to experience drastic change, possibly on the order of magnitude of the Industrial Revolution and on a shorter time scale.