The Scarcity of Trust in the Age of AI: Why Human Expertise Anchors the Welfare Footprint

Wladimir J. Alonso

In a recent analysis, Nate B. Jones, a sharp observer of the evolving landscape of artificial intelligence, offered a compelling diagnosis: as AI creates an abundance of content, it simultaneously creates a scarcity of trust.

In “Why the Smartest AI Bet Right Now Has Nothing to Do With AI,” Jones does not argue that value lies in producing more information. Instead, he points out that as the cost of generation collapses, the real bottleneck shifts elsewhere: to credibility, accountability, and trustworthy judgment. He describes a future where organizations that function, in effect, as “trust banks”—institutions that can authenticate, certify, and provide a reliable signal amidst the noise—are the ones that endure.

The Currency of Credibility

This framing resonates deeply with our work at the Welfare Footprint Institute. We attempt something inherently demanding: quantifying lived affective experiences (Pain and Pleasure) across species and systems to guide real-world decisions. In this domain, trust is not optional. If our estimates are not credible, they cannot inform policy, advocacy, or reform—and suffering remains unchanged.

Jones’ core insight applies directly here: AI dramatically lowers the cost of producing analyses, narratives, and even data-like outputs. But it does not lower the cost of being right. If anything, it raises the premium on institutions and people willing to say, “This is our estimate, and we stand behind it,” and to bear responsibility for that claim.

Our Strategy: Judgment Over Generation

Connecting this “trust deficit” to our own work clarifies why we operate the way we do. Our strategy is built around accountable human judgment, empirical evidence, rigorous scientific methods, and radical transparency.

In an era where plausible text, figures, and explanations can be hallucinated in seconds, we invest in the slow, difficult work of validation. Every estimate we publish is grounded in evidence, documented assumptions, and expert scrutiny. That is not an inefficiency of our process; it is the core of its value.

The Role of AI: Powerful, but Not in Charge

None of this reflects pessimism about AI. Quite the opposite. We see AI as an extraordinary opportunity to augment human capacities—much like calculators, computers, and the internet did before it. Tools do not bear responsibility. Humans do.

We use AI to assist and to explore: to accelerate literature synthesis, test hypotheses, and prototype analyses. For example, we have developed several AI-assisted tools—including custom GPTs such as Hedonic-Track, Zootechnical Mapper, Interspecific Affect, and the Affect Map—designed to support expert-driven welfare analysis within the Welfare Footprint Framework. These systems help us think faster and broader, but they do not replace judgment.

We do not outsource responsibility to algorithms. At the end of the chain, humans remain the ones accountable for what is claimed, measured, and acted upon. As Jones notes in the video, knowing which option is right—and being willing to stand behind it—remains human terrain. In a world of artificial abundance, the most valuable resource is still authentic, accountable judgment. That is the foundation on which the Welfare Footprint analyses are built.

Finally, a note for those who watch the interview in full: the video also contains sharp insights on career strategy and entrepreneurship—particularly on how to build durable value in a world saturated with AI-generated output.