Narrative Control in a Disoriented Age 3/3
Inversion of Stewardship — Why Private Power Now Defines Public Meaning
"When we do not steward public purpose, private systems will impose their own."—Christine Haskell, Ph.D.
In the final post of this series, we follow the arc from public governance to private dominance—how the platforms we once thought of as tools became the very infrastructure through which public life is now filtered, sorted, and shaped.
Where Post 1 revealed the architecture of belief, and Post 2 explored why elegant narratives feel inevitable, this essay widens the frame: who builds the systems we now live inside? Who governs meaning at scale—and what happens when no one does?
We begin with a simple but disquieting question:
Why do companies that present themselves as pro-democracy, global, and socially liberal so often yield to the pressures of authoritarian-leaning figures—rather than defend the civic values they claim to uphold?
From Government to Tech: The Arc of Inverted Stewardship
In the early days of AI and computing, the American government was the steward. It invested in infrastructure not to maximize profit, but to advance collective purpose—security, science, and public good. The internet’s foundation was civic. It belonged, at least aspirationally, to all of us.
But the arc did not hold.
Today, governance flows in the opposite direction. Platforms are setting the norms of discourse, policy is shaped in reaction to product updates, and public institutions increasingly depend on the very firms they once subsidized. This inversion is not merely technological. It is structural, cultural, and psychological.
It is a response to institutional dislocation: when institutions (and leaders) fail to reproduce their own values, someone else steps in to define what matters. This is the cost of a lost national identity.
Public Mission, Private Infrastructure
In the mid-20th century, the American state functioned as what Mariana Mazzucato (2013) calls an "entrepreneurial actor." Agencies like ARPA and NASA absorbed risk to push boundaries no private firm would touch. Their work birthed not just military tools, but civil infrastructure—GPS, weather systems, early networking. Knowledge, even when strategic, was public.
As the Cold War waned and neoliberal economics rose, the state began to retreat. Investment gave way to privatization. Innovation moved from labs to garages. State stewardship was rebranded as inefficiency: the "nanny" we no longer needed.
Yet this retreat did not mean the government disappeared. It reemerged—not as a steward, but as a customer. Agencies that once funded public tools now outsource their operations to private ones. Surveillance capitalism, as Zuboff (2019) argues, was not born in opposition to the state—it was made possible by it. The state licensed the tools. The platforms scaled the incentives.
Into that vacuum stepped the platforms—not as public servants, but as self-legitimizing architects.
The Anxiety of Inheritance
The generation that now runs Silicon Valley was shaped not by shared purpose, but by fragmentation.
Jeff Bezos was born in 1964 and came of age in the shadow of Watergate and the early-1970s unraveling of trust in government.
Elon Musk was born in 1971 and was shaped by the deregulatory wave and economic liberalization of the 1980s.
Mark Zuckerberg was born in 1984 and grew up through the 1990s collapse of faith in institutions like the press, public education, and Congress.
Their solutions reflect that erosion—and the climate that shaped them: an era thick with cultural anxiety, loss of national identity, and the collapse of shared civic imagination. They didn’t just build new systems. They created (or, in Elon’s case, championed) platforms that offered a sense of control and coherence they had never inherited.
This instinct toward self-determined systems, where the builder replaces the state, traces ideologically to Ayn Rand. Her philosophy of Objectivism glorified the rational individual actor above all else, and her characters (industrialists, architects, inventors) often abandoned civic life to build self-governing worlds. That narrative continues today, not only in tech's rejection of regulation but in its belief that moral legitimacy flows from disruption itself. Yet for many of these leaders, the belief was also a balm: a psychological response to the broader cultural need to restore order when institutions no longer could. Their instinct toward self-governance wasn't purely ideological. It let them tell a story in which their systems didn't just serve a market; they restored order. The Randian ethos, in this light, became a coping strategy: a way to moralize control, to turn personal dislocation into institutional design.
Accidental Makers of Meaning
They did not inherit institutional coherence or a unified national identity. It is no accident that we keep returning to World War II films to remember what 'America' once stood for; they offer a cinematic shorthand for a shared national story that no longer exists. What these leaders inherited instead was a deeply contradictory national narrative: was it the philandering Bill Clinton who meant to do better, or the absent-minded Reagan of Iran-Contra, who couldn't recall where the weapons went? We didn't know anymore, and the country wasn't willing to admit it. So, like any good engineer, they wrote code to "fix" it, constructing platforms that imposed coherence where none was given and offering certainty in place of collective reflection.
Each leader responded to institutional breakdown by constructing new forms of order: logistical dominance, engineered escape, and algorithmic connection. These systems weren’t driven by a warped ideology, but by different forms of coping. They are products of leaders attempting to bring structure to their view of a weakened nation, each through their own lens of mastery. What emerged was not civic infrastructure, but engineered environments—self-contained, deterministic, and frictionless—where identity, safety, and meaning could be performed, even if not truly shared.
They had also learned the hard lessons from AT&T, IBM, and Microsoft—witnessing how early tech giants were humbled by antitrust battles, hubris, and their own lack of narrative control. This generation adapted accordingly: not by avoiding dominance, but by framing it as empowerment. They understood that controlling the platform meant controlling the frame—governance, engagement, even public narrative.
By the time Zuckerberg testified before Congress in 2018, the gap was undeniable. The same legislative body that had once grilled Bill Gates with rigor now struggled to grasp the very architecture it was meant to regulate. What was once oversight had become a symbolic performance. Facebook, by then, was larger than most countries—operating as its own sovereign infrastructure, beyond the reach of conventional governance.
In psychological terms, they grew up amid what sociologist Anthony Giddens (1991) called ontological insecurity: the sense that nothing stable remains. Their platforms are not just products. They are attempts to reimpose order—through interface logic, algorithmic clarity, and closed-loop systems of behavior.
Where the New Deal era asked, "What does the public need?" this era asks, "What can we predict, automate, and own?"
Critically, many of these leaders built their business models not just atop these questions, but through the exploitation of the very platforms they controlled. The system wasn’t just the interface. It was the instrument of profit, attention, and control.
This isn’t just ideology. It’s an instinct to cope—a performance of certainty built atop the unspoken fear that, absent structure, the world will come apart. But over time, this performance calcified into a strategic brand of epistemic authority, especially powerful in an AI age where confidence, not accuracy, drives trust. So they built (and championed) structures—self-contained, deterministic, frictionless. Not to invite the mess of democracy, but to escape it.
Government as Dependent, Not Director
By the 2010s, the transformation was complete.
AWS hosted data for the CIA.
Microsoft and Google bid for Pentagon contracts.
Police departments bought predictive tools from companies beyond the reach of FOIA.
The town square became the timeline. Moderation became monetization. Oversight became opt-in. Regulatory compliance—traditionally enforced through public institutions—is now effectively optional, dictated more by corporate will than civic mandate. Tech platforms choose when and how to cooperate with oversight, often responding only to crises, PR risks, or market pressures. What used to be a requirement—accountability to the public—has become a discretionary gesture.
Still, these companies spoke the language of empowerment. Through their branding, mission statements, and public-facing messages, they became known as liberators of employees, customers, and voters—the everyman—and as enablers of freedom, creativity, and connection. These were qualities the government had once instilled and safeguarded but could no longer deliver.
They don’t need to understand you—they just need to predict what you’ll do next. As Shoshana Zuboff (2019) writes, surveillance capitalism does not sell understanding—it replaces it with prediction. That is the ultimate inversion: not just of public and private, but of meaning itself.
Consider the now-ubiquitous recommendation engine: whether it's Spotify suggesting a playlist, Amazon surfacing a product, or Google autocompleting a question, the interface doesn't seek to know you. It seeks to predict you—and to render that prediction profitable. These tools were not just innovations—they were business models. And they reveal the quiet truth behind platform capitalism: that each of these leaders exploited the very mechanisms they built. What began as systems of order became engines of extraction. The coherence they offered was not civic—it was commercial. But for a generation raised amid disorder, that may have felt like enough.
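To make that mechanic concrete, here is a minimal sketch in Python of the logic at the heart of such engines: an illustrative co-occurrence recommender (the item names and listening histories are invented for the example). Production systems at Spotify or Amazon are vastly more sophisticated, but the epistemic move is the same.

```python
from collections import Counter, defaultdict

# Hypothetical interaction logs: each row is one user's listening history.
# The engine never asks who these users are or why they chose these items.
histories = [
    ["song_a", "song_b", "song_c"],
    ["song_a", "song_c", "song_d"],
    ["song_b", "song_c", "song_d"],
]

# Count how often each pair of items shows up in the same history.
# This co-occurrence table is the engine's entire "knowledge" of taste.
co_counts = defaultdict(Counter)
for history in histories:
    for item in history:
        for other in history:
            if other != item:
                co_counts[item][other] += 1

def recommend(last_item, k=2):
    """Predict the next items as whatever most often co-occurred with
    the user's last choice. No model of intent or meaning is involved,
    only observed behavior rendered actionable."""
    return [item for item, _ in co_counts[last_item].most_common(k)]

print(recommend("song_a"))  # e.g. ['song_c', 'song_b']
```

Note what the sketch never computes: who the user is, or why they listened. Correlation stands in for understanding; the prediction, not the person, is what gets monetized.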
Codifying Anxiety, Not Purpose
These systems were not built by villains. They were built by people responding to societal dislocation with systems, trying to create legibility and control in a culture that no longer made sense to them (or to many others). While these infrastructures may have begun as stabilizing responses, when driven by unexamined anxiety and a desire for control, the result is not democracy. It is a simulation (Baudrillard, 1981).
These systems offer the appearance of civic function without the actual deliberation, equity, or reflection that defines democratic institutions. The interface becomes the institution. Engagement becomes identity, and governance becomes branding. For leaders raised in a vacuum of national identity and institutional coherence, this inversion wasn’t just profitable—it was stabilizing. Their Randian instinct to control, predict, and bypass bureaucracy became the framework. Not because it was malicious, but because it offered the clearest form of certainty. If democracy was messy, branding was legible.
Baudrillard warned of a world where maps (the tools) precede territory (our experience). Think about that for a moment. We have long been forewarned that humans will become obsolete, and just a few weeks ago Bill Gates delivered that very message. That's not just design logic. That's a worldview.
We now live in a world where a significant share of human skill and experience is becoming obsolete. If it cannot be fed into a model, it is deemed irrelevant.
This frame is not a technical problem. It is a cultural one. Our systems mirror our fears more than our principles when left unexamined.
A Closing Reframe: Who Holds the Line?
We now live with platforms that promise connection but profit from dislocation, systems that optimize coherence at the cost of understanding, and institutions that mistake interface for integrity. These are not simply technical failures. They are the symptoms of values drift—when original intent is forgotten, not out of malice, but through habit, speed, and omission (Haskell, 2025). When there is no one to steward meaning, prediction takes its place.
This isn’t just about tech policy. It’s about public life. It’s about who decides what alignment means—when the founding mission fades, when institutions dislocate from their purpose, and when dashboards replace deliberation.
Trump’s influence over the Big Nine is not simply political. It reveals a vacuum—of principled architecture, of accountable governance, of moral infrastructure. A space once held by public institutions is now filled with interface logic, while the public wrestles with overwhelm, erosion of trust, and creeping apathy.
That vacuum doesn’t stay empty. It fills—with interface, with algorithm, with code.
Alan Greenspan—Rand’s most prominent disciple—once believed markets would self-correct, that rational actors operating in their own interest would preserve order. But after the 2008 financial crisis, he admitted he was wrong. He had, in his words, “found a flaw.” What he underestimated was not the system’s logic—but human greed, moral abdication, and the cost of unexamined belief.
That confession wasn’t just about economics. It was about the consequences of letting design replace active and engaged stewardship. The same warning applies now.
So the final question is not "What comes next?" It’s:
Who is designing the conditions we now call alignment?
Because when that design is fueled by unexamined anxiety and a hunger for control, it doesn’t lead to stewardship. It leads to simulation.
Recommend This Newsletter
If this sparked a pause, share it with someone who is still trying to hold their complexity in a world that keeps asking them to flatten it. Lead With Alignment is written for data professionals, decision-makers, and quietly courageous change agents who believe governance starts with remembering.