A robot scientist

The Machine Heuristic – or why we trust computers like they’re gods, not gadgets


Imagine you’re in a hospital waiting room. You are given exactly the same advice from two different sources. One is a friendly volunteer in a fleece. The other is a very expensive-looking robot with a calm voice and flashing lights. Most people, rationally or not, trust the robot.

This is called the Machine Heuristic. It’s a behavioural shortcut, a kind of cognitive autopilot. We’re wired to assume that everything a machine does must be logical & objective, and that its lack of emotion means it has no bias – so it must be more accurate & trustworthy.

It’s the same logic that makes people assume the website at the top of Google’s search results must be the best website, when it’s more likely to be the website with the best SEO. Or that an answer from ChatGPT must be reliable simply because it’s delivered by a machine in complete sentences. Computers seem rational, and we conflate rationality with “truth”.

Now, apply this to healthcare. A young parent Googles a worrying symptom and gets an AI Overview – a machine-generated summary stitched together from whatever the algorithm is configured to deem ‘high quality’. The summary sounds confident, reassuring, complete. So they stop looking. They trust it.

But an algorithm is, fundamentally, just a step-by-step set of instructions for solving a problem or accomplishing a task, and Artificial Intelligence layers far more complex decision-making on top of that, which can introduce all kinds of bias & misunderstanding (or even hallucinations)… in many ways AI is more like a human.

The machine isn’t judging “truth”; it’s judging consensus, based on its algorithms and, crucially, on what it can find. If the most credible charity on the topic hasn’t optimised its content for trustworthiness in Google’s eyes, it gets overlooked in favour of someone who has. A slick wellness blog that ticks the technical boxes might get featured – not because its medical information is “better”, but because it has met the requirements of the machine.

This isn’t Google’s fault. And it’s not the user’s either. It’s the result of designing systems that reward signals, not substance.

So the real danger isn’t that machines lie. It’s that we trust them too much, and fail to spot when they’ve stitched together something that sounds plausible but lacks depth, nuance, or proper expertise.

For third sector organisations, particularly in health, the implications are serious. If you’re not visible in the digital sources these machines are trained on – or, worse, if you’re visible but don’t convey that you’re authoritative – then your voice disappears from the conversation.

If you’ve ever seen a TV advert for toothpaste showing someone in a white lab coat with a clipboard, or an advert for a financial service showing someone who looks like an old-school bank manager, that’s because those adverts are crafted to give off “trust signals” and create the impression that we can trust whatever the ad is saying. It’s marketing exploiting the same human cognitive bias.

Google and other search engines are just as susceptible, except they use formulas to assess the trustworthiness of a webpage (in Google’s case, the criteria are known as E-E-A-T: Experience, Expertise, Authoritativeness and Trustworthiness). And those who make the effort to give off the right “trust signals”, especially in healthcare, finance, safety & security topics, are more likely to be shown in search results.

And this is why your website isn’t just a communications tool. It’s a proxy for your reputation. And your metadata, authorship, structure, schema markup, and internal & external links are no longer technical frippery; they’re trust signals. Just like the person in the white lab coat with the clipboard in a toothpaste advert.
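
To make that concrete, here’s a minimal sketch of one such trust signal: schema.org structured data describing a health page, its author and its publisher. Everything in it – the charity, the author, the reviewer, the URLs, the dates – is a made-up placeholder, and this is only one of several ways to express authorship & expertise in a form a machine can read.

    import json

    # Hypothetical example: schema.org structured data for a charity's health page.
    # The organisation, author, reviewer, URLs and dates are placeholders, not real entities.
    structured_data = {
        "@context": "https://schema.org",
        "@type": "MedicalWebPage",
        "headline": "Recognising the early signs of meningitis",
        "author": {
            "@type": "Person",
            "name": "Dr Jane Example",              # a named, credentialed author is an authorship signal
            "jobTitle": "Consultant Paediatrician",
        },
        "reviewedBy": {
            "@type": "Person",
            "name": "Dr Sam Placeholder",           # clinical review signals expertise & trustworthiness
        },
        "publisher": {
            "@type": "Organization",
            "name": "Example Health Charity",
            "url": "https://www.example.org",
        },
        "datePublished": "2024-05-01",
        "dateModified": "2025-01-15",               # keeping this current is itself a trust signal
    }

    # This JSON would normally sit in the page's <head> inside a
    # <script type="application/ld+json"> tag, where crawlers can parse it.
    print(json.dumps(structured_data, indent=2))

None of this guarantees a good ranking, but it tells the machine – in a format it can actually parse – who wrote the page, who stands behind it, and when it was last checked: exactly the kind of signals the heuristic rewards.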