I consider myself an all-rounder in work and in life. I am married to an amazing woman who says I'm the funniest guy she knows. Though likely true, I'm aware she says this just to prop me up. We live in Denver with our three dogs.
I have a unique background in that I've worked many different jobs — from vineyards in New Zealand and construction in Austria to software startups. I value my friendships deeply and have been called fiercely loyal, which I think could be seen as positive or negative at times.
I am a friendly person and can get along with anyone, but I have little patience for excess jargon and hand-wavey actions. To me, the truth is precious, and I love pursuing it. At the end of the day, I simply want to be the best version of myself, do good work that matters, and be around highly performant people who want the same.
The themes here may seem random, but for me they represent the intellectual journey I've been on over the last two years. I have been profoundly moved by the development of AI and drawn into physics, philosophy, and what it means to persist and adapt. It has made me question everything we traditionally understood about the world and ourselves, like I was in college again.
There are many ways to receive the daily onslaught of news on AI development. The scale and rate of change we will see is likely unprecedented in human history. What a time to be alive?! When experiencing a big delta in a short time, fear is a reasonable response. I've processed many different emotions as I've come to grips with the adaptations I will need to make in my own life.
I've also seen others traverse the same thought space and arrive at similar conclusions. It's a process that takes steps. As someone who works in this space and grapples daily with these advances, I believe I'm relatively well equipped to understand what's happening, and I think the general population will begin to undergo a similar evolution.
It's not out of the question that humans will need to grow accustomed to a world where agentic software systems running autonomously will generate revenue or run a train station. Change is coming, but primitives and first principles are immutable. Adaptation is a first principle of persistence. So to thrive in the future, this intellectual evolution is a necessity.
I've landed in a place of acceptance, readiness, and excitement. Predicting the exact course of the future is a futile effort, but the trends are undeniable. Nations, companies, and individuals can position themselves for enormous success by charging into this new era, and I, professionally, am cautiously eager to dive into this future. Personally, my intellectual curiosity is revving.
Fueled by an existential need to thrive in this new era, core concepts in philosophy and physics have dug their roots into my brain. I find it wild that we are here, and we can't even fully define what "we" or "here" is! So these ideas and speculations are my honest attempt to understand our reality.
On Physics and Humanity.
If you examine the world through the laws of math and physics, patterns emerge more clearly. Terms like friction, gravity, orthogonality, and entropy provide a deeper understanding. A simple example is to see human language as more than just sounds coming out of our face holes, but more fundamentally as informational compression, akin to zipping a file.
The words (and expressions, intonations, etc.) that we use are compressions of ideas and events; they are fundamentally data transfer. The less efficient our words, the more friction and energy loss in our communication: entropy, a lossier compression. Compression means less compute, or energy, is required to process the vast input variables through prediction algorithms. In data-science jargon, compression is finding the latent spaces with semantic clustering.
An LLM represents another form of this compression: the "internet" (ish) squeezed into a single weights file. The tokenization algorithm that reduces characters to numbers is also compression. A chatbot actualizes this through text conversation; a software agent can actualize digital action when given coding tools to operate within a computer or over networks. Again, this is just one simple example, but if you apply mathematics and physics to your world, you will see foundational concepts, in this case compression, emerge everywhere.
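To make the zipping analogy concrete, here's a toy sketch in Python using the standard-library zlib module. The strings are my own invented examples: the point is just that redundancy, in language or in bytes, is what a compressor squeezes out.

```python
import zlib

# A wordy message and a terse one carrying roughly the same idea.
wordy = b"At this point in time we are currently in the process of beginning to start the project."
terse = b"We are starting the project."

# The inefficient phrasing still costs more bytes to transmit, even compressed.
print(len(zlib.compress(wordy)), "vs", len(zlib.compress(terse)))

# Pure repetition compresses dramatically: structure means predictability,
# and predictability means fewer bits.
repetitive = b"good morning " * 100
print(len(repetitive), "->", len(zlib.compress(repetitive)))
```

The same intuition scales up: an LLM's weights are a far lossier, far cleverer squeeze of its training text than zlib's exact byte-level one.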
So if you truly want to drill down to first principles, go all the way to the bottom. If you do, you may start to draw analogies from fundamental physics to elements of your life. You may see businesses as attempts at non-dissipative systems, your relationships as harmonic or dissonant wave frequencies, or public figures as information attractors.
Now, these speculations might seem like pseudoscience from a podcast bro, but I have put intellectual rigor and focused imagination behind these thoughts. I'm still aware that these ideas are likely far from fully accurate. I have not yet taken the time to apply existing mathematical formalisms from areas like control theory, assembly theory, computational biology, and information theory to these thoughts, but I think you could. Maybe if I have enough time one day, I will.
That said, we are bound by our humanity, and I embrace that. There is a beauty and tragedy to our existence. Love and pain are very real. So, while it may seem I'm coldly distilling human qualities—friendship, love, hatred, pain—into science and numbers, I am not trying to escape being human. It would be futile and frankly less fun.
On Feedback Loops, Machines, Business and Non-Dissipative Systems.
Feedback loops are everywhere. Intelligence isn't just a prediction model, but a whole system, a feedback loop. Control theory describes feedback loops with concepts like gain, actuation, and positive and negative feedback. A feedback loop has a "set point": the metric or goal the loop drives toward. This is akin to the "goal" for an intelligence to which Richard Sutton, the "father of reinforcement learning," often refers.
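As a toy illustration of those control-theory terms, here's a minimal negative-feedback loop in Python. The thermostat framing, gain value, and step count are all invented for the sketch:

```python
# Minimal negative-feedback loop: a thermostat driving a room toward a set point.
# Each pass measures, compares against the set point, actuates, and repeats;
# the gain scales how aggressively actuation responds to the error.

def run_loop(set_point=21.0, temp=15.0, gain=0.5, steps=20):
    for _ in range(steps):
        error = set_point - temp   # measurement vs. goal
        actuation = gain * error   # proportional control: output scales with error
        temp += actuation          # the environment responds
    return temp

print(run_loop())  # converges toward the set point of 21.0
```

With a gain of 0.5 the error halves every step, which is the "negative" in negative feedback; crank the gain past 2.0 and the same loop overshoots and diverges, the positive-feedback failure mode.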
A closed loop is a system, and a system is a machine. Closed systems are fractal in nature: you can zoom in or out and see the sub-loops or parent loops of a system, and those loops have the same characteristics as feedback loops.
You can easily describe a business as a machine made of many smaller feedback loops and systems. It can have very complex subsystems with set points like KPIs, autonomous miles driven, or cold calls per week. All of these subsystems roll up to the highest-order set point: revenue or profit.
You could take a business, or any organization really, draw out every communication, SOP, and dollar transferred, and map it to its internal and external feedback loops. It is a machine: diagrammable, albeit with human wetware running amok. Humans in these loops, if measured purely on productivity or energy dissipation, are not as good as deterministic machines. We posture, compete, have emotions, get sick, and fail frequently, making these loops inefficient at times. In the near term, LLM-driven agentic systems can be applied to many of these loops.
I think one of the best exercises you could do as a business or any team lead today would be to diagram these loops and find where LLM-based agents could be applied. That is not to say you should do that just to enable a reduction in force, but instead to make existing loops more efficient and better allocate your humans.
An intelligence is a closed system with a goal; it turns entropy from mere dissipation into thrust to traverse its metric space, its goal surface.
The most efficient traversal systems are those with the least energy leakage, or unintentional dissipation. A jet engine with leaks cannot create as much thrust; a car engine with a cracked head gasket will not deliver as much power. Similarly, the best businesses are those with the least money leakage. Usually this is achieved by a small subgroup of people keeping the wheels on, or a heavy-handed CEO ensuring value is properly delivered to customers.
If you go up and down the different dimensionality scales, you find the same patterns of this intelligence system. A prokaryotic cell, with its energy-barrier cell wall, has goals of survival and reproduction. A business can be viewed as an intelligence with a main goal of procuring money, among other goals. The two systems are at very different scales and dimensionalities, but they share the same principles. It just depends on where you draw the energy boundaries for those systems.
On Intelligence and Math.
I don't like that we call it 'artificial' intelligence. There's nothing artificial about it. Using that term, we are simply applying wetware biases to digital intelligence. To call the math used in these models artificial would be to call the way our brains work artificial. The math driving LLM behavior is likely different from what is in our cortexes, but it's simply math, and math is, most simply, prediction.
I hear folks in tech talk about prediction as a relatively novel concept; prediction is a buzzword. "AI models make predictions," which is true. But making predictions is of course a fundamental activity of life. A predator predicts the movements of its prey, a tree predicts the changing of the seasons, a company predicts macroeconomic conditions, etc.
Now, I won't wade into the debate of formalism vs. Platonism in mathematics here, but math, most simply, is prediction: the prediction of the relationship between two (or more) variables. I know I'm making some mathematicians cringe, as I am taking liberties here, but consider the simple function f(x) = x. The function predicts the resulting value given x. In this simple example, the prediction is always right!
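One small step up from f(x) = x is fitting the relationship from data, which is where prediction stops being always right and starts being a bet. Here's a self-contained sketch of an ordinary least-squares line fit in plain Python; the data points are made up to sit near y = 2x:

```python
# Fitting y = a*x + b by least squares: math as a prediction of the
# relationship between two variables, learned from noisy observations.

xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]  # roughly y = 2x, with noise

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Slope: covariance of x and y over variance of x.
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

def predict(x):
    return a * x + b  # the model's bet for any x, seen or unseen

print(round(a, 2), round(b, 2))
```

The fitted slope lands near 2, and `predict` will confidently extrapolate, rightly or wrongly, far beyond the five points it ever saw. That gap between the function and the world is the whole game.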
We as intelligences operate in a high-dimensional world with only snapshots of data. The math required to make predictions has to be complex and statistical. We make bets, update our priors, and repeat.
Take Bayesian math, put it into a high-dimensional space, add data and linear algebra, and voila: prediction models. Create a loop by repeating that process while measuring against a metric: intelligence. That's obviously an oversimplification of the work necessary to create LLMs, other AI weights files, and AI systems, but at its core AI is simply math or, at the least, describable by math.
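The "make bets, update your priors, repeat" loop has a textbook minimal form: a Beta prior over a coin's bias, updated one flip at a time. A hedged sketch with invented flip data:

```python
# Bayesian updating in its simplest form: a Beta(alpha, beta) prior over a
# coin's probability of heads, updated one observation at a time.

alpha, beta = 1.0, 1.0            # uniform prior: no opinion about the bias
flips = [1, 1, 0, 1, 1, 1, 0, 1]  # observed data (1 = heads, 0 = tails)

for flip in flips:
    alpha += flip        # each heads is evidence for a heads-leaning coin
    beta += 1 - flip     # each tails is evidence the other way

estimate = alpha / (alpha + beta)  # posterior mean: the current best bet
print(round(estimate, 2))
```

Swap the coin for a billion-parameter weights file and the flips for a training corpus and the shape of the argument survives, even though the machinery underneath (gradients rather than conjugate priors) does not.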
On LLMs, General Intelligence, and the Near Future.
What a tool LLMs have shown themselves to be. It could be that "next-most-likely-token" algorithms over language remain the biggest breakthrough we'll see in digital intelligence, though my suspicion is that even bigger breakthroughs are coming. In the immediate term, LLMs will dramatically accelerate many more breakthroughs. Scientists once encumbered by the tedium of coding have been unleashed to generate models at recursive scales.
Sutton emphasized that an intelligence needs a goal: a numeric metric by which to measure its performance (implicit in current LLM offerings is a goal of 'conversation,' it seems). But I don't fully agree with Sutton's take (or perhaps I've misinterpreted him) that LLMs aren't relevant to developing AGI. Intelligence is a process loop:
A closed system with a set point -> environment variables -> compression -> math (prediction) -> actuation -> measurement -> repeat.
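That chain can be sketched as runnable toy code. Every class and function name here is illustrative, standing in for real sensors, learned models, and actuators; the shape of the loop is the point, not the implementations:

```python
# The intelligence loop as a toy: a 1-D "environment" whose state the system
# drives toward a set point. Each stage below maps to one arrow in the chain.

class Environment:
    def __init__(self, state=0.0):
        self.state = state
    def sense(self):                    # environment variables
        return {"state": self.state, "noise": 0.0}
    def act(self, delta):               # actuation changes the world
        self.state += delta

def compress(observation):              # compression: keep only what matters
    return observation["state"]

def predict(state, set_point):          # math (prediction): choose an action
    return 0.5 * (set_point - state)

def intelligence_loop(set_point, env, steps=50):
    for _ in range(steps):
        obs = env.sense()                   # observe
        state = compress(obs)               # compress
        action = predict(state, set_point)  # predict
        env.act(action)                     # actuate
        error = abs(set_point - env.state)  # measure against the set point
        if error < 1e-6:
            break                           # goal reached
    return env.state

print(round(intelligence_loop(10.0, Environment()), 3))
```

In an LLM's training run, data scientists currently hand-crank several of these stages (curating the environment variables, choosing the metric); the argument in the next paragraph is that closing more of the loop is an engineering problem, not a conceptual one.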
The creation process of an LLM is the whole intelligence loop with a few of the steps performed manually by data scientists. General intelligence can be built by coordinating many of these loops. AGI is not some magic, singular model that will be able to do everything with any data.
We are at the advent of a Cambrian explosion of these autonomous digital intelligence loops with smaller, specific goals. A general intelligence will be constructed by compiling these loops, all with specific metrics meant to serve the aggregate goal of a larger system. I wouldn't be surprised if an agentic system harnessing the math of computational biology is among the first such autonomous systems we see.
Maybe more wars will break out, or maybe rewarding jobs for humans that we could never have imagined will emerge. Maybe humans will start curing cancers en masse or develop novel energy grid systems. Humans currently still drive the goal of AI systems, but maybe out of the life-like forms that evolve out of this emerging digital world, autonomous digital systems will set their own goals.
Like a petri dish, massive data centers may pseudo-spontaneously spawn digital ecosystems. The lines will blur even more between the digital world and our 3D space as once purely digital actors (REST APIs, data centers, etc.) actuate more autonomously into physical space.
It seems the cat is out of the bag here. As humans, we could unplug these now recursively evolving machines, but we won't. Instead of combing through pull requests, software engineers will soon be picking from AI-created executables based on their performance against general tests, hopping from improvement to improvement. That's evolution, baby.
A friend of mine would argue that this is a natural, unavoidable evolution driven purely by thermodynamics. I'm pretty sure I agree. With recursively changing digital intelligences, the only rate limiter that bumps up their assembly index is the aperture of the energy gate given to them–the aggregate compute pumped through them.
Quantum physics gave us nuclear bombs but also nuclear power. There will be immense potential energy for destruction, but an equal amount of possibility for progress. International governance is a necessity.
I am likely not 100% correct here. Again, chaos theory shows that it is extremely difficult, effectively impossible, to predict the future in this high-dimensional space. There may be hidden variables that naturally limit this digital evolution. Governance wielded eloquently may direct energy allocations appropriately. The future, as always, is uncertain.
At the very least, it should be very clear to anyone that we are about to experience drastic change, possibly on the order of the Industrial Revolution and on a shorter timescale.