What top founders are writing about right now: AGI control, brand collapse, and the computing reset
A five-piece digest covering the most substantive long-form writing from Feb–May 2026: Sam Altman on AGI control, Paul Graham on brand and commoditization, the Collisons on agentic commerce, Reid Hoffman on Silicon Valley's ideological fault lines, and Jensen Huang on the computing reset.
Five pieces published between late February and mid-May 2026 by Sam Altman, Paul Graham, Patrick and John Collison, Reid Hoffman, and Jensen Huang. They don't share a single thesis, but read together they sketch a coherent picture of the pressures early-stage AI founders will face this year.
Sam Altman: the psychology of wanting to control AGI
Published: April 10, 2026 · blog.samaltman.com
The circumstances are hard to ignore. A Molotov cocktail was thrown at Sam Altman's house at 3:45 AM on the night this post went up. He published it in the middle of that night, describing himself as "pissed." The post includes a family photo, Altman leaning toward his toddler at a birthday table, which he says explicitly is meant to deter anyone contemplating another attack. 1
That context matters for reading the rest of it, because the post is structured less like a crisis statement and more like a considered manifesto. Altman divides it into three parts: five normative beliefs, a decade of personal retrospective, and a structural diagnosis of the AI industry.
The five beliefs are worth taking seriously as a framework:
- Advancing science and expanding prosperity is a moral obligation.
- AI is the most powerful tool ever built, and demand for it is "uncapped."
- Fear about AI is warranted — a society-wide safety response is urgently needed, not just technical alignment work.
- AI must be democratized; no single actor should concentrate its power.
- Adaptability is essential, because "no one understands the impacts of superintelligence yet." 1
The diagnosis in the third section is the part that will age most interestingly. Altman names a dynamic he calls the "ring of power":
"Once you see AGI you can't unsee it. It has a real 'ring of power' dynamic to it, and makes people do crazy things." 1
His argument is that the drama between AI labs — including his own conflicts, including the board ousting, including the Musk litigation — isn't fundamentally about the technology. It's about the psychology of believing you're the one who should control it. The cure, in his framing, is structural: share capabilities broadly, empower individuals, keep democratic institutions in control, and "de-escalate the rhetoric and tactics."
He is self-critical in ways worth noting. He describes conflict-aversion as a significant leadership failure that "caused great pain for me and OpenAI," and calls it "handling myself badly." He also names a real strategic transition directly: "OpenAI is now a major platform, not a scrappy startup," and has to "operate in a more predictable way." 1
For early-stage AI founders: The ring-of-power dynamic is not unique to lab CEOs. Any founder building platform-level AI infrastructure will eventually face the same pull — the belief that you're the right actor to own the category creates the same kind of closed, defensive decision-making Altman is diagnosing. The platform-vs-startup transition is also a real operational question: Altman identifies a moment when "more predictable operation" becomes necessary. Knowing when you've crossed that line — and planning for it before you're forced to react — is a decision worth making deliberately.

Paul Graham: what happens to your product when technology makes it a commodity
Published: March 2026 · paulgraham.com/brandage.html
Graham's essays are often about Silicon Valley through an oblique lens, and this one is no different. The subject is Swiss mechanical watches. The argument is about every AI company building on top of rapidly converging model performance.
The essay traces what happened to the Swiss watch industry after three simultaneous shocks in the early 1970s: Japan won the 1968 Geneva Observatory accuracy trials, the 1973 collapse of Bretton Woods made Swiss watches 2.7 times more expensive for American buyers, and quartz movements turned accuracy into a commodity. Swiss unit sales fell by nearly two-thirds between the early 1970s and early 1980s. 2
Only three independent watchmakers survived: Patek Philippe, Audemars Piguet, and Rolex. What they did to survive is what Graham is actually writing about.
They stopped competing on engineering. They pivoted to brand — redesigning cases to be recognizable across a room, running advertising that emphasized expensiveness, and eventually implementing artificial scarcity. Patek Philippe now requires buyers to spend years purchasing lower-tier watches before they can qualify to buy a Nautilus. The company buys hundreds of its own watches annually on the secondary market to trace who is flipping them, and cuts off retailers whose customers flip. 2
The failure case is instructive. Omega responded to Japanese competition by engineering a more accurate movement — a 45% higher-frequency caliber released in 1968 — which was so fragile it destroyed their reputation. By 1981 they were insolvent. 2
Graham's central principle is concise:
"Brand is what's left when the substantive differences between products disappear. But making the substantive differences between products disappear is what technology naturally tends to do." 2
He draws a sharp conceptual distinction:
"Branding is centrifugal; design is centripetal." 2
Good design converges toward the right answers; branding requires differentiation even at the cost of optimality. The two forces are in permanent tension, and you cannot fully pursue both.
There is one principle Graham offers as an exit from the trap. Rather than trying to build a brand or out-engineer a commoditizing field, he writes: "follow the problems." Golden ages happen when smart, ambitious people work hard on genuinely interesting problems. You don't manufacture a golden age. You find the problems that are actually worth solving, and the right people and moments accumulate around them. 2

For early-stage AI founders: The watch industry's collapse happened fast once the underlying differentiation disappeared. Foundation model performance is converging in a way that rhymes with quartz movements making accuracy a commodity in 1973. Founders who are building differentiation primarily on top of model capability benchmarks are in Omega's position. The ones building around genuine customer problems — where their insight about the problem is the moat, not the model — have something closer to Patek's position before the brand pivot was even necessary.
Patrick and John Collison: five levels of how AI agents will replace your customer
Published: February 24, 2026 · stripe.com/annual-updates/2025
The Stripe 2025 Annual Letter is co-authored by Patrick Collison (co-founder and CEO of Stripe) and John Collison (co-founder and President), covering Stripe's business results and what they see structurally shifting in commerce. The most useful section for AI founders is the five-level framework for agentic commerce.
Business context first: Businesses on Stripe generated $1.9 trillion in total payment volume in 2025, up 34% year-over-year, representing roughly 1.6% of global GDP. Stablecoin payment volume reached $400 billion, with 60% of that being B2B transactions — a number the letter frames as the beginning of a "stablecoin summer." eCommerce grew 30% over three years while brick-and-mortar grew only 5%. 3
The agentic commerce framework describes five stages of how AI agents progressively replace human purchasing decisions:
| Level | Description |
|---|---|
| Level 1 | AI auto-fills purchase forms; the human still decides what to buy |
| Level 2 | Descriptive prompts — consumer describes a situation ("back-to-school supplies for a third grader who likes K-Pop") and AI identifies options |
| Level 3 | Long-term memory — AI remembers purchase history and preferences without re-explanation |
| Level 4 | Delegated purchasing — consumer sets a budget and conditions, AI autonomously completes purchases |
| Level 5 | Fully autonomous — goods arrive before the consumer asks, based on calendar and life patterns |
According to third-party analyses of the letter, the Collisons described the industry as "still hovering between levels 1 and 2" as of early 2026. 4
The implication is structural: at Levels 3 through 5, the human is no longer the primary "customer" of a brand's marketing or product presentation. An AI agent is. That agent will make decisions based on structured data — product attributes, pricing logic, availability — not on a homepage's visual design or a brand narrative. The letter explicitly advises treating product data as a first-class strategic asset, arguing that brands relying solely on visual presentation risk invisibility as agents move up the levels. 5

For early-stage AI founders: Two immediate applications. First, if you're building a product or marketplace with consumer-facing commerce, the Level 3–5 transition is a design constraint you can build toward now — structured, machine-readable product data isn't a future optimization, it's a near-term competitive moat. Second, if you're building agentic products themselves, the Collisons' framework gives you a concrete maturity ladder for where enterprise customers currently sit and where they'll need infrastructure as they move up. It's a positioning map.
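What "structured, machine-readable product data" can look like in practice is worth making concrete. The sketch below builds a JSON-LD document using the public schema.org Product vocabulary, the kind of representation an agent can parse without scraping a visually designed page. The helper function, field choices, and product values are illustrative assumptions, not anything specified in the Stripe letter.

```python
import json

def product_jsonld(name, sku, price, currency, attributes):
    """Build a schema.org-style Product document an agent can parse directly.

    `attributes` holds the structured product facts (capacity, color, ...)
    that an agent would compare across merchants. All values here are
    hypothetical examples.
    """
    return {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": name,
        "sku": sku,
        # Arbitrary product attributes, expressed as schema.org PropertyValue
        # pairs so an agent can filter on them without custom parsing.
        "additionalProperty": [
            {"@type": "PropertyValue", "name": k, "value": v}
            for k, v in attributes.items()
        ],
        "offers": {
            "@type": "Offer",
            "price": f"{price:.2f}",
            "priceCurrency": currency,
            "availability": "https://schema.org/InStock",
        },
    }

doc = product_jsonld(
    name="Canvas Backpack 20L",
    sku="BP-20L-NVY",
    price=49.00,
    currency="USD",
    attributes={"capacityLiters": 20, "color": "navy", "laptopSleeve": True},
)
print(json.dumps(doc, indent=2))
```

The design choice worth noting: everything an agent needs to decide (price, availability, comparable attributes) lives in typed fields, while the homepage's visual presentation carries none of the decision-relevant signal. That is the inversion the Level 3–5 framework predicts.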
Reid Hoffman: Silicon Valley as a religion with eight denominations
Published: April 20, 2026 · reidhoffman.substack.com/p/faith-in-the-possible
Reid Hoffman (co-founder of LinkedIn, partner at Greylock) published this as his inaugural Substack essay, marking his entry onto the platform after years of primarily speaking through books and long-form interviews. 6
The essay is structured as a manifesto-style argument about what Silicon Valley actually is. Hoffman's thesis: it's a religious movement. Not metaphorically. He means it has a doctrine, a set of denominations, a shared creed, and a conversion experience.
The central doctrine, in his framing, is that no force in history has done more to improve human life than technology, and no institution is better positioned to build it at scale than the company. His response to the standard critique — "God complex" — is precise:
"Silicon Valley has a God complex. The critics are not wrong about that. They are just wrong about which god. Money alone sends you to Wall Street. Power alone sends you to Washington. Something else entirely sends you to a garage in Menlo Park: faith." 6
He catalogs eight denominations within the faith:
- Purists — code alone changes the world
- Humanists — society is the real lever
- Missionaries — the goal is elevating everyone
- Venture Capitalists — the return on investment justifies the investment
- Narcissists — the goal is elongating their own shadow
- Accelerationists — no hesitation is permitted
- Ethicists — carefully, or not at all
- Determinists — the lever pulls itself
The risk Hoffman identifies is not a theological one. It's behavioral:
"The risk is not that we build too boldly. The risk is that we stop believing building matters — that we trade the possible for the permissible, creation for consensus." 6
He closes with a line that reads as a direct provocation toward the Ethicists and Determinists in the taxonomy: "Technology's arc bends toward access. But it does not bend on its own." 6

For early-stage AI founders: The eight-denomination map is a self-diagnostic. Most founders sit at the intersection of two or three denominations — and the tension between them shapes daily operating decisions in ways that rarely get surfaced explicitly. A Missionary who has taken VC funding from investors who are primarily in the VC denomination will face specific, predictable friction around growth-at-any-cost decisions. An Accelerationist co-founding with an Ethicist will have structural disagreements that aren't actually about technical safety — they're theological. Naming the denomination gap doesn't resolve it, but it makes it diagnosable rather than just corrosive.
Jensen Huang: the complete reset of computing is your market
Published: May 10, 2026 · singjupost.com (transcript)
Jensen Huang (founder and CEO of NVIDIA) delivered the commencement address at Carnegie Mellon University's 128th graduation ceremony on May 10, 2026. An 18-minute prepared speech, it covers three arcs: his personal immigrant journey and NVIDIA's near-death founding experience, the structural shift in computing, and a call to engage with AI rather than retreat from it. 7
The founding story is worth hearing on its own terms. NVIDIA's first technology failed. Huang flew to Japan to ask Sega's CEO to release the company from its contract while still paying NVIDIA anyway. He describes leading from that position: "I learned early that being CEO is not about power, but the responsibility that comes with keeping the company alive. And that honesty and humility can be met with generosity and kindness, even in business." 7
The structural claim is the one that matters for founders: "For 60 years, computing worked the same way. Humans wrote software, computers executed instructions. That paradigm is over." The shift he's describing is from CPUs executing fixed instructions to systems that understand, reason, plan, and use tools. This isn't an incremental upgrade — it's the kind of platform reset that creates new infrastructure businesses. 7
On the scale of the opportunity: he calls the AI buildout "the largest technology infrastructure build in human history," requiring "trillions of dollars of infrastructure investment" and representing "a once-in-a-generation opportunity to reindustrialize America." 7
On the jobs question, he's direct:
"AI is not likely to replace you. But someone using AI better than you might." 7
He frames four imperatives for engaging with the AI transition: advance AI safely, create thoughtful policies, make AI broadly accessible, and encourage everyone to engage. He closes with: "No generation has entered the world with more powerful tools or greater opportunities than you." 7

For early-stage AI founders: Huang's infrastructure framing is not inspirational decoration. The compute buildout he describes is the actual commercial context for your market window; the infrastructure spending he cites creates real demand for AI-native software that sits on top of it. His closing charge carries a specific implication: the window in which a relatively small team can capture a meaningful position in this infrastructure shift is measurable in years, not decades. NVIDIA itself was founded during the previous platform reset, in a window that looked permanently open and turned out to be quite finite.
Cover image: illustration from Reid Hoffman's "Faith in the Possible." Image from reidhoffman.substack.com.