
The Return of Exponential Growth: How AI Reignited What Moore's Law Left Behind
For fifty years, one observation governed the trajectory of modern civilization more reliably than any economic theory, political doctrine, or scientific law outside of physics: Moore's Law. First articulated by Gordon Moore in 1965 and revised a decade later, it predicted that the number of transistors on a chip would double approximately every two years, driving exponential gains in computing power at steadily declining costs. It was not a law of nature. It was an empirical observation that became a self-fulfilling prophecy, as the entire semiconductor industry organized itself around the expectation of relentless, predictable progress.
And for decades, reality cooperated. The Intel 4004 in 1971 had 2,300 transistors. By 2015, chips had surpassed ten billion. Each doubling brought faster processors, cheaper memory, smaller devices, and entirely new categories of technology that were unimaginable a generation before. Personal computers, the internet, smartphones, cloud computing — all of it rode the same exponential wave. The world grew accustomed to the idea that computing would keep getting better, faster, and cheaper forever.
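A quick back-of-the-envelope check shows how well those two endpoints fit a two-year doubling curve. The sketch below is illustrative arithmetic only, assuming a perfectly clean doubling cadence, which real chips never followed exactly:

```python
# Sanity-check the doubling arithmetic using the figures above.
# Assumes an idealized doubling every two years, from the Intel 4004 onward.

initial_transistors = 2_300        # Intel 4004, 1971
years_elapsed = 2015 - 1971        # 44 years
doublings = years_elapsed / 2      # one doubling per two years -> 22

projected = initial_transistors * 2 ** doublings
print(f"{doublings:.0f} doublings -> ~{projected / 1e9:.1f} billion transistors")
# 22 doublings -> ~9.6 billion transistors, right around the ten-billion
# mark that the largest chips actually crossed in the mid-2010s.
```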
Then the wave began to break.
The Physics Wall
By the early 2010s, the signs were hard to ignore. Clock speeds had already plateaued in the mid-2000s, when Intel abandoned its push past 4 GHz because of thermal limits: a chip whose power density per square centimeter was approaching that of a nuclear reactor was not a viable consumer product. The industry pivoted to multi-core architectures, but that was a workaround, not a continuation of the original curve. Software had to be rewritten to exploit parallelism, and not all problems decomposed neatly into parallel tasks.
The deeper issue was physical. Transistor gates were shrinking toward dimensions where quantum effects became inescapable. At the 7-nanometer node, electrons began tunneling through barriers that were supposed to insulate them. At 5 nanometers, the distinction between "on" and "off" started to blur. Each successive node required exponentially more capital (a state-of-the-art fabrication plant now costs upward of $20 billion) while delivering diminishing returns in performance per watt.
Dennard scaling, the companion principle that kept power density constant as transistors shrank, so that smaller also meant more power-efficient, had already broken down by around 2006. Moore's Law was still technically alive in transistor counts, but the performance gains it once guaranteed were slowing. The exponential curve that had defined the digital age was bending toward a plateau.
By 2015, the mood in the semiconductor industry had shifted from confidence to concern. The ITRS, the International Technology Roadmap for Semiconductors, wound down, publishing its final roadmap in 2016 and giving way to a more cautious successor, the International Roadmap for Devices and Systems. Papers began appearing with titles like "The End of Moore's Law", not as provocation but as sober engineering assessment. The era of automatic, guaranteed progress was ending.
The Quiet Revolution
What happened next was not a single breakthrough but a convergence. While the semiconductor industry was wrestling with physics, a different exponential was quietly gathering momentum — one driven not by hardware shrinkage but by algorithmic insight, data accumulation, and architectural innovation in software.
The seeds had been planted decades earlier. Neural networks, first theorized in the 1940s and periodically revived, had always been limited by two things: insufficient data and insufficient compute. By the mid-2010s, both constraints were dissolving simultaneously. The internet had generated more data than any researcher could have imagined in 1990. Cloud computing had made massive computational resources available on demand. And a series of algorithmic breakthroughs — batch normalization, residual connections, attention mechanisms — had made deep networks trainable at scales previously considered impractical.
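As one concrete illustration of what such an architectural idea looks like in code, here is a minimal sketch of a residual block; the layer sizes, the PyTorch framing, and the use of layer normalization are illustrative choices rather than a reproduction of any specific paper. The core trick is that each layer learns a small correction that is added back onto its input, which is what keeps very deep stacks trainable.

```python
# Minimal sketch of a residual connection, one of the algorithmic ideas
# named above. Illustrative only; dimensions and layers are arbitrary.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.norm = nn.LayerNorm(dim)          # normalization stabilizes training
        self.ff = nn.Sequential(
            nn.Linear(dim, 4 * dim),
            nn.GELU(),
            nn.Linear(4 * dim, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The skip connection: output = input + learned correction.
        # Gradients can flow straight through the addition, even in deep stacks.
        return x + self.ff(self.norm(x))

stack = nn.Sequential(*[ResidualBlock(512) for _ in range(48)])  # 48 layers deep
x = torch.randn(8, 512)                                          # batch of 8 vectors
print(stack(x).shape)                                            # torch.Size([8, 512])
```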
The inflection point came quietly. In 2012, AlexNet won the ImageNet competition by a margin that shocked the computer vision community. In 2016, AlphaGo defeated Lee Sedol. In 2017, the Transformer architecture was published in a paper titled "Attention Is All You Need" — a modest title for what would become the most consequential machine learning paper of the decade. In 2020, GPT-3 demonstrated that scaling language models produced emergent capabilities that no one had explicitly programmed.
Each of these milestones was impressive on its own. Together, they described a new exponential curve — one where capability was doubling not because transistors were shrinking, but because models were scaling, architectures were improving, and the interplay between software and hardware was creating compounding returns that transcended any single component.
The 2022 Inflection
If you had to pick a single year when the world noticed, it would be 2022. The release of ChatGPT in November of that year did not represent a fundamental scientific breakthrough — the underlying architecture had been published five years earlier, and GPT-3 had already demonstrated the approach at scale. What it did was make the exponential visible to everyone.
Within two months, it had over 100 million users. Within a year, every major technology company had reorganized its strategy around generative AI. Investment in AI infrastructure exceeded $100 billion annually. NVIDIA, a company that made graphics cards for gamers, became one of the most valuable companies on earth because its chips happened to be ideal for the matrix multiplications that neural networks required.
The economic implications were immediate and profound. Coding assistants could generate functional software from natural language descriptions. Image generation models could produce photorealistic content in seconds. Language models could summarize legal documents, write marketing copy, analyze financial statements, and tutor students — all tasks that had previously required years of human training and expertise.
But the deeper significance was structural. A new exponential growth curve had emerged, and it was not constrained by the same physics that limited semiconductor scaling. The performance of AI systems was improving through multiple simultaneous vectors: better architectures, more efficient training methods, larger datasets, improved hardware utilization, and — crucially — the ability of AI systems to accelerate the development of the next generation of AI systems.
The New Exponential
The original Moore's Law worked through a single mechanism: make transistors smaller, fit more on a chip, get more compute. The new exponential is fundamentally different. It operates through feedback loops that compound across multiple domains simultaneously.
AI systems design better chips. Better chips train better AI systems. Better AI systems discover more efficient algorithms. More efficient algorithms require less compute to achieve the same results, or achieve better results with the same compute. Each improvement creates leverage for the next, and the rate of improvement itself is accelerating.
This is not a metaphor or a projection. It is observable in the data. The compute used to train frontier AI models has been increasing by roughly 4x per year — far faster than Moore's Law ever delivered. The cost to achieve a given level of AI performance has been dropping by approximately 10x per year. The capabilities of the best models have been expanding into domains — mathematical reasoning, scientific discovery, creative synthesis — that were considered decades away as recently as 2020.
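Compounding those rates makes the gap concrete. The sketch below simply applies the approximate figures quoted above over a five-year window; it is arithmetic, not a forecast:

```python
# Compound the approximate growth rates quoted above over five years.
moore_per_year = 2 ** (1 / 2)     # ~1.41x: doubling every two years
compute_per_year = 4.0            # frontier training compute, ~4x per year
cost_drop_per_year = 10.0         # cost of a fixed capability, ~10x cheaper per year

years = 5
print(f"Moore's Law compute gain:   {moore_per_year ** years:8.1f}x")
print(f"Frontier training compute:  {compute_per_year ** years:8.0f}x")
print(f"Cost of a fixed capability: 1/{cost_drop_per_year ** years:,.0f}")
# Over five years: ~5.7x versus ~1024x, and a fixed level of performance
# costing roughly one hundred-thousandth as much, if those rates held.
```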
Hardware continues to advance, but it is no longer the sole driver. TSMC and Samsung are still pushing to smaller nodes, now at 2 nanometers and below, using extreme ultraviolet lithography and gate-all-around transistor designs. These advances matter. But they are now one input among many in a system where software innovation, data engineering, and systems architecture contribute at least as much to the overall performance trajectory.
What This Means
The implications extend far beyond the technology industry. When computing power was the primary bottleneck, progress in every field that depended on computation — biology, materials science, climate modeling, drug discovery, financial analysis — was gated by hardware availability. The new exponential removes that gate, not by making hardware infinitely fast, but by making software dramatically more capable of extracting value from the hardware that exists.
A pharmaceutical company that once needed years and billions of dollars to screen drug candidates can now use AI to predict molecular interactions and narrow the search space by orders of magnitude. A materials scientist who once relied on trial-and-error synthesis can now use generative models to propose novel compounds with desired properties. A financial analyst who once spent hours reading earnings transcripts can now have an AI system extract the key signals in seconds.
These are not future possibilities. They are current realities, and they are improving at exponential rates.
The stalling of Moore's Law was real. The end of exponential growth was not. What changed was the source of the exponential — from atoms to algorithms, from hardware to intelligence itself. The curve did not flatten. It jumped to a new axis.
For those building in this landscape, the lesson is clear: the companies and institutions that recognize this shift and position themselves at the intersection of AI capability and domain expertise will define the next era. The ones that wait for the old exponential to resume will wait forever.
The future does not slow down. It changes shape.

