Google AI Overviews and Healthcare Charities

Google AI search results and the health sector


Or… If Google Doesn’t Trust You, Why Should Anyone Else?

In May 2025, Marie Curie, the Patient Information Forum and Macmillan Cancer Support published a report on a roundtable discussion among 70 organisations from across the health sector, all of whom shared concerns about Google AI Summaries (also known as Google AI Overviews, and formerly the Search Generative Experience, or SGE) in relation to healthcare search results.

The biggest concern in the report relates to Google AI Overviews showing the wrong information – sometimes wildly wrong, sometimes just “not quite right.” 

You can (and should) read their report before reading this response to it.

Some of their concerns are valid & important, some feel a little, well, naive.

Their proposed fixes:

  • Suspend AI summaries on health topics in the UK.
  • Restrict results to NHS-approved sources.
  • Geo-prioritise UK organisations.
  • Only show verified healthcare content.

It’s a noble-sounding wish list. But it also reveals a worrying blind spot. Because none of this is how the internet works. And frankly, it’s not how Google works.

If you’re applying for a grant, you speak the language of the grant funding body. The applicants who speak the grant funding body’s language best, who demonstrate that they tick the right boxes, will come away with the grants. Similarly, if you want to be heard by a machine, you have to learn its language. And right now, most charities aren’t speaking that language.

It’s Time for Some Tough Love for a Tough Topic.

Google’s AI Summaries and the Digital Skills Gap in Healthcare Charities

The Authority Illusion

Let’s be honest: many people believe their content should be seen as authoritative. But being an expert isn’t enough. You also have to show you’re an expert – to a reader, to a policymaker and, yes, to a search engine.

The report acknowledges the lack of good quality information out there, and that Google can only work with what’s available to it on the internet. So it’s up to the organisations themselves to provide that good quality information, or someone else will fill the gap with lower quality information.

That may sound harsh, but that’s how the internet works – over 5 billion people have access to the internet globally, and any of them can claim to be anything.

Google doesn’t know if you’re the UK’s leading voice on rare blood cancers, or the blog of a very persuasive flat-earther. It doesn’t “understand” trust in the human sense. It infers it – based on hundreds of signals that indicate your site is consistent, accountable, up to date, meets industry standards, and respected by others.

Google is trying to stop misinformation by de-ranking the “trust me, bro” websites and favouring well-researched, SEO-friendly sites with citations and external links to sources. There are specific trust signals that convey experience, expertise, authority & trust, and if those signals are missing from a healthcare website, it is unlikely to appear in searches (or AI Overviews).

It’s why a well-researched, SEO-friendly website that links out to respected sources, provides a good user experience, and is written by an amateur might get shown in AI Overviews (or rank highly in SERPs) before an error-filled website run by a healthcare charity does. The amateur went to the extra effort of ticking the boxes and demonstrating why they should be trusted; the national charity didn’t make that effort.

Human Behaviour & the Machine Heuristic

There’s also another, entirely different factor at play here: the machine heuristic. Without going into great detail, humans have a built-in cognitive bias that makes them assume machines are cold, logical, objective, evidence-based, unbiased and, therefore, more trustworthy than a human. As a result, people can put more faith in an AI’s responses than those responses deserve. This is relevant to the subject of AI Overviews, but it’s behavioural & nuanced, so we’ll leave the detail for another day.

Crisis UX

Another concern in the report relates to the risk that inaccurate or unsuitable AI Overviews pose to people in crisis – and crisis UX is an area of user experience Narrative is particularly interested in.

The report suggests that any search deemed to be from a person in crisis should provide “support options at every stage”. That’s a pretty broad & vague statement.

However, Google has been detecting personal crisis searches, and displaying support options, since at least 2011. In 2022, it announced it was upgrading to one of its latest AI models, MUM (the Multitask Unified Model), to detect personal crisis searches and display relevant links & support contacts. Last time I checked, it was limited to certain types of personal crisis searches (mainly domestic violence, sexual assault, suicide-related, & substance abuse), but there is a system in place.

Who Gets Approval?

The report’s other proposals – suspend AI summaries, use only NHS sources, geo-block international content – are understandable. But they’re also deeply flawed. They rely on the idea that Google is omniscient, obedient, and an arbiter of truth. It’s none of those things.

And as for only using NHS-approved content? Be careful what you wish for. If that happens, many smaller charities – especially those supporting rare and niche conditions – will be locked out completely. It will become a race to see who can play Google’s game best and who can afford to pay for certification. Without support or digital know-how, many of the most needed voices will be silenced by the report’s proposals.

Healthcare Charities & Organisations Have a Path to Approval – They’re Just Not Using It

Ironically, many of the things the charities are asking for – checking for trust signals, verified sources, credible content – already exist in a pragmatic, and fairly reliable, way. Google has spent years developing a framework for this exact purpose, and there are two main parts to it.

One part is called YMYL, which categorises content that falls under sensitive topics like health, finance, safety & security. It’s a cringeworthy but quite accurate acronym – it stands for Your Money or Your Life – and anything in this category has to demonstrate that it knows what it’s talking about.

The way Google assesses whether content can be trusted is called EEAT – Experience, Expertise, Authoritativeness and Trustworthiness. If your website’s content is YMYL (like healthcare), you need to demonstrate those EEAT factors clearly, in ways the search engine understands, to stand a chance of being ranked in search engine results. If you demonstrate EEAT well, you might even show up in the AI Overviews.

But that takes effort. Not just well-written articles, but clean, hierarchical HTML, basic accessibility, accountable/reputable author bios, internal & external links, transparent sourcing, and technical hygiene. And that’s before we get into things like structured data & topic clusters. It’s not sexy, it’s just good housekeeping – and it works. We’ve helped healthcare charities do it. And their content is appearing (accurately) in Google’s AI summaries as a result.
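To make “structured data” less abstract, here’s a minimal sketch of what it can look like in practice. It uses Python’s standard json module to build a schema.org JSON-LD block for a hypothetical charity article – the organisation, author and URLs are invented for illustration, and the exact properties worth including will depend on your content.

```python
import json

# A minimal schema.org JSON-LD block for a (hypothetical) health article.
# Markup like this spells out authorship, review and freshness signals --
# the kind of EEAT evidence discussed above.
structured_data = {
    "@context": "https://schema.org",
    "@type": "MedicalWebPage",
    "headline": "Understanding Rare Blood Cancers",
    "url": "https://www.example-charity.org.uk/rare-blood-cancers",
    "datePublished": "2025-01-15",
    "dateModified": "2025-05-01",
    "lastReviewed": "2025-05-01",
    "author": {
        "@type": "Person",
        "name": "Dr Jane Doe",                # hypothetical author
        "jobTitle": "Consultant Haematologist",
        "url": "https://www.example-charity.org.uk/team/jane-doe",
    },
    "reviewedBy": {
        "@type": "Organization",
        "name": "Example Health Charity",     # hypothetical organisation
        "url": "https://www.example-charity.org.uk",
    },
    "citation": [
        "https://www.nhs.uk/conditions/",     # transparent sourcing
    ],
}

# Emit the JSON-LD payload for the page template.
print(json.dumps(structured_data, indent=2))
```

The output goes inside a script tag of type application/ld+json in the page’s head, where crawlers can read it alongside the visible content.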

One charity in the report complains about not being cited despite publishing “accurate, up-to-date content.” But a quick check of their website revealed a lot of pages with missing meta descriptions, missing or duplicate H1 tags, no schema markup/structured data, and a poor accessibility score that you certainly wouldn’t want to shout about. 
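Checks like the ones above take minutes to automate, so there’s little excuse for not knowing where you stand. Here’s a rough sketch using the requests and beautifulsoup4 packages – the URL is a placeholder, and a real audit tool would check far more than this:

```python
# A quick-and-dirty page audit: flags missing meta descriptions,
# missing or duplicate H1s, and absent JSON-LD structured data.
# Requires: pip install requests beautifulsoup4
import requests
from bs4 import BeautifulSoup

def audit_page(url: str) -> list[str]:
    """Return a list of basic on-page problems found at `url`."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    problems = []

    # Meta description: should exist and not be empty.
    meta = soup.find("meta", attrs={"name": "description"})
    if meta is None or not meta.get("content", "").strip():
        problems.append("missing or empty meta description")

    # H1s: exactly one per page is the usual convention.
    h1s = soup.find_all("h1")
    if len(h1s) == 0:
        problems.append("no H1 tag")
    elif len(h1s) > 1:
        problems.append(f"{len(h1s)} H1 tags (expected 1)")

    # Structured data: look for any JSON-LD block at all.
    if not soup.find_all("script", attrs={"type": "application/ld+json"}):
        problems.append("no JSON-LD structured data")

    return problems

if __name__ == "__main__":
    # Placeholder URL -- point this at your own pages.
    for issue in audit_page("https://www.example-charity.org.uk/"):
        print("-", issue)
```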

From Google’s perspective, that’s like going to a GP who is based in an old shack with paint peeling off the sign, the wrong phone number, and a door that doesn’t open unless you give it a good tug… the GP might be brilliant, but it isn’t the kind of place you’d expect an experienced, trustworthy expert to be working from.

Ironically, the Patient Information Forum’s own PIF Tick – a trust mark for verified UK health information – makes it unnecessarily difficult for certified organisations to demonstrate that valuable trust signal. PIF Tick does not offer individual profile pages for certified members to link to. Instead, there’s a directory of certified organisations and individuals buried within multiple expandable accordions on a single webpage, or available as a downloadable PDF. Neither format is ideal when a visitor to the website (or a search engine) wants to verify a member’s credibility. This is frustrating because it’s avoidable, and easily fixed.
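For what it’s worth, the fix could be as simple as giving each certified member a stable, crawlable profile URL, plus machine-readable credential markup on the member’s own site. Here’s a sketch of the latter, again in Python and with invented names and URLs – schema.org has no PIF-specific type, so this leans on the generic hasCredential property:

```python
import json

# Sketch: markup a certified member could publish on its own site,
# pointing at a (hypothetical) individual PIF Tick profile page.
member_markup = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Health Charity",          # hypothetical member
    "url": "https://www.example-charity.org.uk",
    "hasCredential": {
        "@type": "EducationalOccupationalCredential",
        "name": "PIF Tick",
        "credentialCategory": "certification",
        # A stable, member-specific profile page -- this URL is invented;
        # no such individual pages exist on the PIF site today.
        "url": "https://pifonline.org.uk/members/example-health-charity",
    },
}

print(json.dumps(member_markup, indent=2))
```

One crawlable page per member, linked in both directions, would let a visitor (or a search engine) confirm the certification in a single hop.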

The Real Issue No One Wants to Say Out Loud

Here it is. The real, uncomfortable truth: the biggest risk to digital trust in the third sector isn’t AI – it’s the sector’s lack of digital literacy.

Overall, the report into Google AI search results and the health sector is clearly well-meaning, and it flags valid concerns, but it also shows a disturbing lack of relevant knowledge & digital literacy when it comes to addressing them.

It’s hardly news – the 2024 Charity Digital Skills Report found that 50% of UK charities have no digital strategy at all, and that figure has hardly changed in five years. A third don’t use data to inform decisions, and over 30% admit to being “poor” at managing the data they do have.

That’s not an AI problem. That’s a skills gap.

What Needs to Happen Now

If healthcare charities really want to influence what AI shows people – or even what Google shows people – they need to invest in how their organisation presents itself online. That means taking EEAT seriously. That means looking under the hood of their website and content. That means training their team. Or bringing in help from people who’ve done this before.

Google’s summaries are far from perfect. They will sometimes show entirely wrong information, the wrong source, or a nearly-right-but-not-quite answer. But they are getting better, they aren’t going away, and now is the time to get the right, accurate content into them.

Charities have a choice: they can keep hoping the rules will change. Or they can learn to play the game.

If your organisation has the real-world experience, the credibility, the expertise, then it’s time to start looking like it online too.