“The way to do a piece of writing is three or four times over, never once.” — John McPhee
Last January, I was invited to talk about becoming more proficient with AI. It’s August now, and people are grieving that the newest chatbots don’t “feel” as affectionate as their predecessors. That’s a good sign. Remember the obsequious friend your mother warned you about, the one who’d “help you right off a cliff”? Lacking self-awareness, today’s systems will gladly do exactly that.
The pace of the AI conversation has also warped expectations for scholarship. We’ve raced (too quickly, I would argue) from “how would we use this?” to “why wouldn’t we use this?” Academia isn’t immune to the hype.
Since April, I’ve served as guest editor for two special issues and lead editor for two books on the future of AI. I’ve read a mountain of abstracts. Many disappoint, but not for the reasons you might think.
What’s Happening in the Literature
A 2025 analysis of psychological science, spanning multiple subfields, tracks constructs and measures over three decades. The picture isn’t flattering:
~39,000 novel constructs and ~43,000 novel measures since 1993.
Over half of those measures were never used again outside their debut paper.
Fragmentation has been trending up; some subfields lead the way.
Figure: Treemap plots for Personality and Social Psychology. Each tile represents one measure, labeled with its acronym where space allows. Larger tiles indicate more frequently used measures; the many small tiles show fragmentation.
In a landscape with dozens of near-duplicate constructs and scales claiming to assess the same thing, cumulative knowledge stalls. Novelty gets rewarded; boundary-setting and psychometrics get sidelined. Until journals prize reuse, refinement, and replication, not just invention, we’ll keep mistaking pointillism for progress.

This isn’t only an academic problem. In uncredentialed communities (e.g., genealogy groups, local history forums, open-source projects), the quiet craft that keeps public knowledge usable is already strained by AI “slop”: invented relatives, phantom citations, misattributed records. Volunteers do the cleanup. That too is standards work: the patient labor of verification that keeps shared meaning from eroding.
The Default to Resist
As a new editor and long-time research-practitioner, I review every submission through the lens of: Is this helping leaders confront complexity, or helping them manage around it?
When AI is mediating decisions about justice, labor, identity, and access, leadership frameworks that prize comfort, tidy ethics, or performative “balance” can quietly reinforce the very harms they claim to address. A manuscript can be beautifully written, technically sound, and practically useful—yet still avoid naming power, risk, or structural complicity. If it sidesteps ethical tension or reframes rupture into a tidy learning tool, it may be doing more to preserve institutional legitimacy than to challenge it.
With authors, I respectfully press the premise. To move society and systems ahead, we have to move past box-checking toward greater integrity. A practical test I now apply: does this framework improve the data supply chain—capture, use, and aftercare (provenance, verification, deletion)—or does it simply add another checklist that exports maintenance to someone else?
Invisible Hands of Scholarship
“Publish or perish” isn’t just tenure folklore; it’s the water researchers swim in. Clocks tick—impact metrics, rankings, and grant cycles reward output and novelty. The content machine, now supercharged by AI, demands volume. Those incentives shape manuscripts. The same is true downstream. When hurried frameworks travel into practice, others inherit the aftercare: librarians and archivists, community moderators, and volunteer historians who trace sources, correct records, and retire bad data. Stewardship isn’t a feature request; it’s ongoing maintenance—often unpaid.
The pressure is understandable. Careers depend on lines in a CV, but discernment is non-negotiable. If we accept volume as virtue, we harden fragmentation and export it into practice. This is especially dangerous when AI-adjacent frameworks are quickly adopted in workplaces that affect justice, labor, identity, and access.
Editors and reviewers feel the same pull: more papers, faster turns, broader calls. Yet the tasks that secure quality remain largely unseen and underpaid. Deep reading, genuinely constructive developmental feedback, and careful synthesis take time because getting it right still carries a cost. The invisible hands that uphold the social contract of scholarship belong to early-career scholars, contingent faculty, and volunteer practitioner-authors, who absorb most of that cost today. When these same people resist top-down edtech mandates or call out performative efforts (e.g., unreliable AI detectors, blanket bans), they often carry the job risk for everyone else: quietly signing open letters, or not daring to sign at all. Raising the bar can’t rely on the least protected to hold it up; it has to be a shared act.
Inside the Process
Between margin notes, calls with anxious authors, and late-night rereads, you hold someone’s learning journey—and your own. You protect what’s forming while asking it to get clearer. You coordinate peer review, reconcile competing notes, and uphold a quality bar no one can see but everyone can feel.
An editor’s voice isn’t a headline; it’s a handrail. It asks: What is the claim? What belongs? What can be left unsaid so the core can be heard? Sometimes the right move is a scalpel; sometimes it’s silence. Either way, the work serves the piece—not the editor’s or the author’s ego.
Beneath it all is validity: authorship, evidence, and where—if anywhere—AI belongs in the task. You also hold what others don’t see: the writer clinging to a draft, waiting for the right moment; the 2 a.m. thank-you after life has intervened; the occasional entitled indifference that reminds you why standards matter. The old line is true: “Only doctors and editors see behind the curtain.” And we don’t tell. What we learn stays in the margins.
Outside formal education, the same quiet standards work is happening in public knowledge commons. We owe those stewards aftercare, not just output.
Gratitude—and a (First) Launch
This special issue took over six months and began with more than twenty-five abstracts. Five made it through—not because the rest lacked merit, but because clarity, fit, and timing aligned. To the peer editors who labored with care, and to the authors who trusted me while their ideas were still fragile: thank you.
Cultivating an editorial voice while others find their writing voice is quiet work. It’s also a privilege.
Leaders who don’t design for reflection inherit ritual. Agency breaks the pattern. Governance sustains progress.
https://onlinelibrary.wiley.com/toc/1935262x/2025/19/2