In the first episode of Pharma Decrypted, host Sabine Louët, founder of SciencePod, sat down with Alexander Jakobson, founder of CAIRO, to unpack why pharma content is failing in the age of AI – and what teams can do about it. Jakobson’s company specialises in measuring and analysing how AI models understand and represent pharmaceutical products, giving him a front-row seat to the gap between pharma’s content intentions and what patients and HCPs actually receive.
Here are the key takeaways from the conversation.
1. Your Content Objective Has Shifted – Whether You Realise It or Not
Until recently, the pharma content playbook was relatively straightforward: create high-quality material and push it through controlled channels such as websites, email campaigns, SMS, WhatsApp, or gated portals. The goal was visibility and engagement within the environments you could manage.
That model is now outdated.
The goal today? No longer just reaching your audience directly. It’s ensuring that AI systems understand, surface, and accurately represent your content when questions are asked about your indication, your therapeutic area, or your brand by name. AI platforms have now become the intermediary between your content and the people who need it.
Why does this matter? Because in highly regulated markets like Europe, pharma companies often start with unbranded content to attract attention, reserving branded material for their gated areas. But AI models can’t get past those gates. So what happens? They synthesise whatever information they can find, or fabricate what they can’t, and present it to users with the same confident tone regardless of the source’s authority.
The implication is clear: if you’re not actively shaping the information landscape that AI models draw from, someone else – or the AI itself – will do it for you.
2. Pitfalls That Are Costing You More Than You Think
One of the most striking revelations from the webinar was Jakobson’s case study of a targeted oncology drug. This product had been approved by the FDA and was actively on the market. Yet, when tested across multiple AI platforms six months after launch, it had zero brand recognition. None of the major AI models knew the drug existed.
Worse still, when the models were prompted about this drug’s contraindications, they didn’t simply say they lacked information. They fabricated contraindications – inventing clinical details that could influence prescribing behaviour and alarm patients searching for information about their treatment.
This example illustrates the three key pitfalls pharma companies face in the AI era:
1. Incorrect answers. AI models confidently generate wrong information about your product. In one test, a model initially denied the drug existed, then, when pushed, started offering dosing advice for that very same drug. The inconsistency is alarming but predictable when models lack authoritative source material.
2. Omissions. Even when a model partially represents your product, it may not surface the benefits that differentiate you from competitors. If your key clinical advantages aren’t clearly articulated and making their way into AI-generated responses, you’re effectively invisible at the point of decision-making.
3. Misrepresentation relative to competitors. Without your input, AI models will construct their own narrative about how your drug compares to others in your therapeutic area. That narrative may not align with the clinical evidence.
A crucial insight here is that these AI models have a global representation of any given drug. Unlike traditional search, which can be shaped by local regulations and market-specific strategies, an AI chatbot will give broadly similar answers whether the user is in Europe, Asia, or the United States. Local regulatory strategies around what information to publish don’t translate to AI platforms in the way they do to traditional channels.
3. The Secret Ingredients: Measurement, Speed, and “Share of Question”
So how do pharma teams get ahead of this? According to both Louët and Jakobson, the answer lies in a fundamental shift in operating model, from periodic publishing to continuous monitoring and rapid response.
From “Share of Voice” to “Share of Question”
One of the most useful concepts introduced in the discussion was the idea of “share of question”. Traditionally, pharma teams focused on share of voice: how visible their brand was across channels. But in a world where AI generates answers on demand, what matters is whether your content is the one informing the response when a relevant question is asked.
This means pharma companies need to understand what questions patients and HCPs are actually asking chatbots about their therapeutic area. They then need to identify where AI answers are incomplete or inaccurate, and position themselves as the authoritative source the AI draws upon to fill those gaps.
It’s a hidden opportunity in plain sight. Where AI models produce weak, generic, or incorrect responses to specific clinical questions, there’s a content gap waiting to be filled by the company with the most relevant, most authoritative answer.
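To make the concept concrete, here is a minimal sketch of how a team might operationalise “share of question”: the fraction of tracked questions for which your own material was judged to inform the AI’s answer. This is our illustration, not a method described in the episode, and all question text and names (`share_of_question`, `DrugX`) are hypothetical.

```python
# Hypothetical sketch: estimating "share of question" for a brand.
# Assumes you already have, for each tracked question in your
# therapeutic area, a judgement of whether the AI's answer drew
# on your own authoritative content.

def share_of_question(results: dict[str, bool]) -> float:
    """Fraction of tracked questions where our content informed the answer.

    `results` maps each question to True if the AI-generated answer was
    judged to rest on our material, False otherwise.
    """
    if not results:
        return 0.0
    return sum(results.values()) / len(results)

# Illustrative (made-up) audit of four common patient questions:
audit = {
    "What are the contraindications of DrugX?": True,
    "How does DrugX compare to the standard of care?": False,
    "What dosing schedule does DrugX use?": True,
    "Is DrugX approved in Europe?": False,
}
print(share_of_question(audit))  # 0.5
```

The hard part in practice is the judgement step (deciding whether an answer actually drew on your content); the metric itself is just a proportion you can track per market, per question category, over time.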
There Is No One-Size-Fits-All Recipe
Jakobson was careful to note that there’s no universal formula. This is partly because AI models are effectively black boxes. You can observe what goes in and what comes out, but you can’t see the internal mechanics. However, a few principles consistently hold:
- Make your content discoverable to AI. This sounds obvious, but it’s not how most pharma content is structured. If your material is locked behind authentication walls or embedded in formats that AI crawlers can’t easily parse, it won’t be ingested.
- Shape content around the questions being asked. Rather than publishing raw clinical data and hoping the AI can reformulate it into useful answers, consider structuring content in an FAQ format that directly addresses the questions your patients and physicians are asking.
- Measure continuously, not periodically. Because AI models are probabilistic – meaning the same question asked twice can yield different answers – a single snapshot tells you very little. As Jakobson put it: “They’re statistical systems, so you need statistical methods to measure them.” Continuous monitoring reveals how your brand representation drifts over time and whether your content interventions are actually working.
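The “statistical methods” point can be illustrated with a toy sketch (our assumption of what such monitoring might look like, not CAIRO’s actual method): sample the same question repeatedly, record whether each answer represents the brand correctly, and track the rate, with a confidence interval, across measurement periods. The data and the `mention_rate` helper below are made up.

```python
import math

# Toy sketch of statistical monitoring for a probabilistic AI channel.
# `samples` is a list of booleans: True if a sampled answer represented
# the brand correctly, False otherwise. All data here is illustrative.

def mention_rate(samples: list[bool]) -> tuple[float, float]:
    """Return (rate, 95% margin of error) for the correct-representation rate."""
    n = len(samples)
    p = sum(samples) / n
    # Normal-approximation interval; adequate for a monitoring dashboard.
    margin = 1.96 * math.sqrt(p * (1 - p) / n)
    return p, margin

# The same question sampled 20 times in two different weeks:
week_1 = [True] * 14 + [False] * 6   # 70% correct
week_2 = [True] * 9 + [False] * 11   # 45% correct: drift worth investigating

for label, samples in [("week 1", week_1), ("week 2", week_2)]:
    rate, margin = mention_rate(samples)
    print(f"{label}: {rate:.0%} ± {margin:.0%}")
```

The point of the interval is exactly the one Jakobson makes: a single query sits somewhere inside that margin, so only repeated sampling can tell you whether a change in answers reflects real drift or ordinary run-to-run variation.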
Speed Matters More Than Ever
One of the sharpest points in the conversation came when Jakobson highlighted the speed mismatch between pharma content production and AI response generation. If a new safety concern, a rumour, or new interpretive data emerges today, the AI will generate an answer instantly. It won’t wait for the manufacturer to publish guidance next quarter.
This creates a real problem for pharma teams accustomed to lengthy MLR (medical, legal, regulatory) review cycles. The time between conceiving a piece of content and getting it approved and published can be substantial – and in the meantime, the AI narrative around your product is being written without you.
Louët suggested several practical approaches to close this gap: greater reliance on pre-approved templates and claims libraries, using AI tools to generate first drafts that are then refined and approved by human editors, and building a more agile content workflow that can respond to gaps identified through continuous monitoring.
The key principle is that silence is no longer a safe strategy. In the pre-AI era, a pharma company might have chosen not to comment on a topic as a way of controlling the conversation. Today, that silence is immediately filled. What fills it may be fabricated, inaccurate, or shaped by competitors.
4. Why Pharma Content Fails – And the Path Forward
The reassuring news from the webinar is that pharma content doesn’t fail because it’s bad. Most pharmaceutical content is carefully researched, clinically accurate, and well-written. It fails because it was designed for a world that no longer exists.
Traditional pharma content was built for controlled channels where the format, context, and audience were known. A PDF on a gated portal. An email to a segmented HCP list. A carefully crafted website page optimised for search engine rankings. These channels gave teams control over how their message was presented.
The AI era strips that control away. Your content is now consumed, deconstructed, and reassembled by language models that don’t distinguish between a regulator-approved label and a forum post that merely looks authoritative. When AI-generated content starts feeding other AI models – a phenomenon that’s already happening – the potential for narrative drift and compounding inaccuracies grows exponentially.
This is why human oversight remains non-negotiable. As Jakobson put it plainly: AI can generate content, but it can’t approve it. A language model doesn’t understand the hierarchy of evidence. It can’t tell the difference between a response that sounds plausible and one that’s supported by clinical data. The responsibility for accuracy and compliance still rests with the people and organisations publishing the content.
The Real Recipe for Disaster
The discussion converged on a clear warning: the greatest risk isn’t getting your content strategy wrong. It’s not having one at all for AI channels. Ignoring what AI models are saying about your products and hoping for the best is a recipe for disaster both commercially and from a patient safety perspective.
Pharma companies also have an inherent advantage they’re underutilising: they sit on the actual clinical data. Everything else – downstream interpretations, third-party summaries, anecdotal content – is derivative. Phase III clinical trial data, real-world evidence, regulatory submissions: this is the most authoritative source material available. Question-oriented content, written to answer what HCPs and patients are actually asking, is where the opportunity lies.
Key Takeaways
- Your audience has moved. AI chatbots are the new front door for medical information. If your content isn’t influencing AI-generated answers, you’re not part of the conversation.
- Silence gets filled. In the absence of authoritative content from manufacturers, AI models fabricate answers. They will invent contraindications, dosing advice, and competitive comparisons. This isn’t a theoretical risk. It’s happening today.
- Think in questions, not channels. The shift from “share of voice” to “share of question” is the defining strategic pivot for pharma content teams in 2025 and beyond.
- Measure like a scientist. AI models are probabilistic. A single query tells you nothing. Continuous, statistical monitoring of how AI represents your brand is the foundation of any effective response.
- Speed is survival. If your content production cycle can’t keep pace with the speed at which AI generates narratives about your product, you’ll always be playing catch-up. Templates, claims libraries, and AI-assisted drafting with human editorial oversight are the tools that close the gap.
- Humans in the loop, always. AI can draft content. It cannot validate it. The hierarchy of evidence, regulatory compliance, and clinical accuracy all require human judgement. This isn’t a limitation – it’s pharma’s competitive advantage.
The views expressed in this interview are the speaker’s own.
Interview conducted by Sabine Louët. Edited from the interview transcript for clarity and length.
