I used to think this was the beginning of the story.
That sentence — Louise Banks’s voiceover in Denis Villeneuve’s Arrival — might be the most precise articulation of what the SEO industry is experiencing right now. We thought we knew the narrative. Rankings. Traffic. Clicks. Conversions. A sequential logic as comforting as it was predictable. Publish content. Build links. Climb the SERPs. Harvest the clicks.
Then the ships arrived.
AI Overviews. AI Mode. ChatGPT Search. Perplexity. Claude. Machines that don’t rank pages; they synthesize answers. And suddenly, the story we thought we were living — the one where position #1 meant visibility, where traffic was the proxy for success — turned out to be something else entirely. Not a memory. A premonition.
Arrival is, on its surface, a first-contact film. But beneath that surface, both the film and Ted Chiang’s original novella Story of Your Life are something far more precise: a meditation on what happens when a fundamentally different intelligence forces you to restructure how you perceive reality.
Not your tactics. Not your tools. Your cognition itself.
That is exactly what AI search is doing to us. And the practitioners who refuse to learn the new language will suffer the same fate as the generals in the film who misread the signal.
Learning the language: the Sapir-Whorf shift from keywords to entities
The central mechanism of both the film and the novella is linguistic relativity — the Sapir-Whorf hypothesis.
Louise Banks, a linguist recruited to communicate with the alien Heptapods, doesn’t merely translate their language. She learns it. And in learning it, her cognition transforms. Her internal monologue shifts from what Chiang calls “phonologically coded” thinking — sequential, word-by-word, the way humans speak inside their own heads — to “graphically coded” thinking: holistic, simultaneous, relational.
This is not a metaphor for what AI search demands of SEOs. It is what AI search demands of SEOs.
For two decades, SEO thinking was phonologically coded.
We thought in keywords. Sequential strings of text. One query at a time, one ranking at a time, one page at a time. The entire discipline was organized around matching strings to pages, pages to positions, and positions to clicks. It was linear, causal, and comfortable.
Entity-based thinking — the kind that Knowledge Graphs, structured data, and LLM reasoning require — is graphically coded. It is holistic. It operates on relationships between concepts, not on the sequential occurrence of words.
When an LLM processes a query, it doesn’t walk through a ranked list of pages from top to bottom. It perceives a web of entities, their attributes, their relationships, and their consensus-level credibility across the entire information landscape. Simultaneously. The way a Heptapod perceives a sentence: all at once, before the first stroke is written.
Kevin Indig captured this Sapir-Whorf moment with uncomfortable precision when he admitted: “I catch myself all the time having the reflex to solve an AI search problem using classic SEO thinking. And it’s like, ‘Oh no, wait. I have to take a step back here.’” (See the conversation I had with him in this episode of The Search Session.)
That confession is the SEO equivalent of Louise Banks staring at a Heptapod logogram for the first time and realizing that everything she knew about language was a subset of a much larger truth.
When Amit Singhal introduced Google’s Knowledge Graph in May 2012, he framed it with a deceptively simple inversion: “things, not strings.”
The SEO community later paraphrased this as “strings to things” — emphasizing the journey rather than the destination. But both formulations describe the same cognitive shift: from phonological to graphical, from sequential to simultaneous, from keywords to entities. From Heptapod A to Heptapod B.
The data makes the shift visceral. An updated Ahrefs study from February 2026 found that AI Overviews now correlate with a 58% lower average click-through rate for the top-ranking page, up from 34.5% just ten months earlier. SISTRIX’s March 2026 analysis of the German market found that Position 1 CTR collapsed from 27% to 11%, an estimated 265 million organic clicks lost per month. Meanwhile, Google’s AI Mode — reaching 75 million users as of December 2025 — ends without any click to an external website approximately 93% of the time.
The old language isn’t dead. But it is rapidly becoming insufficient.
Seeing the whole path: Fermat’s Principle and the teleological inversion
This is where Chiang’s novella becomes indispensable, because the film, for all its elegance, streamlines the physics that makes the allegory structurally airtight.
In Story of Your Life, Chiang builds the entire narrative on a real principle of physics: Fermat’s Principle of Least Time. A ray of light, when passing from air into water, bends.
Humans explain this causally: the light hits the water and refracts according to Snell’s Law. Step by step. Cause and effect.
But the same phenomenon can be explained teleologically, through variational principles. The light “chooses” the path that minimizes total travel time, as if it knew its destination before it started. The ray doesn’t calculate sequentially. It perceives the entire journey simultaneously and selects the optimal route.
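Fermat’s Principle is easy to verify numerically. The sketch below uses a toy geometry and illustrative speeds (all values are invented for the demonstration): it brute-forces the surface crossing point that minimizes total travel time, and Snell’s Law simply falls out of the minimum. The causal law is recoverable from the teleological one.

```python
import math

A, B, D = 1.0, 1.0, 2.0   # height above / depth below the surface, horizontal span
V1, V2 = 1.0, 0.75        # speed in air vs. water (illustrative, not physical units)

def travel_time(x):
    """Total time from (0, A) in air, to (x, 0) on the surface, to (D, -B) in water."""
    return math.hypot(x, A) / V1 + math.hypot(D - x, B) / V2

# Brute-force the crossing point that minimizes total travel time.
x_star = min((i * D / 100000 for i in range(100001)), key=travel_time)

# At that minimum, Snell's Law emerges: sin(theta1)/v1 equals sin(theta2)/v2.
sin1 = x_star / math.hypot(x_star, A)
sin2 = (D - x_star) / math.hypot(D - x_star, B)
print(abs(sin1 / V1 - sin2 / V2) < 1e-3)  # True: the two ratios agree
```

No ray was traced step by step here; the endpoint and a minimization principle were enough to reproduce the refraction angle.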
Chiang’s genius is to extend this from optics to cognition. The Heptapods don’t think causally. They think teleologically.
As Chiang writes: “Humans had developed a sequential mode of awareness, while heptapods had developed a simultaneous mode of awareness. We experienced events in an order and perceived their relationship as cause and effect. They experienced all events at once and perceived a purpose underlying them all.”
And here is where the allegory becomes surgical.
Traditional SEO is causal physics. You publish content (cause). You build links (cause). You climb the rankings (effect). You get clicks (effect). You convert (effect). Each step follows the previous one in a chain of cause and effect, and the practitioner walks that chain sequentially.
AI search is variational physics. The system starts from the answer — the user’s need, the endpoint — and works backward to find the optimal combination of sources that will produce that answer.
Google’s query fan-out mechanism does this literally: a single user query spawns dozens of sub-queries, each seeking the fastest path to the most complete, most accurate response. Like Fermat’s ray of light, the AI doesn’t walk through the rankings sequentially. It perceives the entire information landscape and selects the optimal path.
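Google has not published the internals of query fan-out, so the following is only a conceptual sketch with invented queries, sources, and coverage data. It shows the teleological shape of the process: a broad query is decomposed into sub-query facets, and a covering set of sources is assembled greedily, so the required answer determines which pages are selected rather than their rank.

```python
# Toy fan-out: one broad query decomposed into sub-query facets (hypothetical).
FAN_OUT = {
    "best crm for small business": [
        "crm pricing comparison",
        "crm ease of use reviews",
        "crm integrations list",
    ],
}

# Which facets each (hypothetical) source can answer.
SOURCE_COVERAGE = {
    "vendor-a.example/pricing": {"crm pricing comparison"},
    "review-site.example/roundup": {"crm ease of use reviews",
                                    "crm pricing comparison"},
    "docs.example/integrations": {"crm integrations list"},
}

def synthesize(query: str) -> list[str]:
    """Greedily pick sources until every sub-query facet is covered."""
    needed = set(FAN_OUT[query])
    chosen = []
    while needed:
        # Pick the source covering the most still-unanswered facets.
        best = max(SOURCE_COVERAGE, key=lambda s: len(SOURCE_COVERAGE[s] & needed))
        if not SOURCE_COVERAGE[best] & needed:
            break  # nothing left can help
        chosen.append(best)
        needed -= SOURCE_COVERAGE[best]
    return chosen

print(synthesize("best crm for small business"))
# ['review-site.example/roundup', 'docs.example/integrations']
```

Note what never appears in this sketch: a ranked list. The selection is driven entirely backward from the facets the answer must cover.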
The evidence? Ahrefs’ March 2026 study — 863,000 keywords, 4 million AI Overview URLs — found that only 38% of AI Overview citations come from pages ranking in the traditional top 10 organic positions. Down from 76.1% in July 2025. Fully 62% of citations now come from pages ranking below position 10, with 31% coming from beyond position 100 entirely. BrightEdge’s February 2026 data put the overlap even lower, at approximately 17%.
The AI doesn’t ask “what ranks first?” That’s the sequential question, the causal question, the human question. It asks, “What combination of sources produces the optimal answer?” That’s the teleological question. The endpoint determines the path.
In Chiang’s novella, Heptapod elementary mathematics is the calculus of variations; what is graduate-level physics for humans is basic arithmetic for them. The implication for our industry is uncomfortable but precise: what feels like advanced strategy to us — entity optimization, knowledge graph architecture, distributed credibility — is the basic arithmetic of how AI systems naturally operate. We are not inventing something new. We are finally learning the native language of the machines.
Two languages, one brand: Heptapod A, Heptapod B, and the semasiographic divide
Here is where the allegory produces its most actionable framework.
Heptapod A is the aliens’ spoken language. It is sequential, phonological, and roughly analogous to human speech. Heptapod B is their written language.
It is — and this is critical — semasiographic: it conveys meaning directly, without mapping to speech.
Unlike every human writing system, which is glottographic (a visual representation of spoken language), Heptapod B encodes meaning through spatial relationships between elements. Non-linear. Holistic. Two-dimensional.
The crucial detail: Heptapod A and Heptapod B are grammatically unrelated. The written system is not derived from the spoken system. They are parallel, structurally independent languages that happen to be produced by the same intelligence.
Our websites have two languages, too.
Our content — our articles, our product descriptions, our blog posts — is Heptapod A. It is sequential, speech-derived, and written for human cognition. People read it word by word, paragraph by paragraph, in order.
Our structured data — our Schema.org markup, our JSON-LD, our Knowledge Graph signals — is Heptapod B. It is semasiographic. It conveys meaning not through sequential text but through structured relationships between entities, properties, and values.
A JSON-LD @graph with @id references creates what I call a small internal knowledge graph, where meaning emerges from the relationships between entities, simultaneously, the way a Heptapod semagram encodes an entire sentence in a single continuous form.
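To make the semasiographic point concrete, here is a minimal sketch (all entity names are hypothetical): a JSON-LD @graph whose nodes point at one another through @id references. The few lines of Python after it resolve those references, showing that the meaning lives in the relationships and can be read starting from any node, in any order.

```python
import json

# A minimal JSON-LD @graph for a hypothetical brand: three entities linked
# by @id references rather than by their position in the document.
doc = json.loads("""
{
  "@context": "https://schema.org",
  "@graph": [
    {"@type": "Organization", "@id": "#org",
     "name": "Acme Analytics",
     "founder": {"@id": "#jane"}},
    {"@type": "Person", "@id": "#jane",
     "name": "Jane Doe",
     "worksFor": {"@id": "#org"}},
    {"@type": "Article", "@id": "#post",
     "headline": "Entity SEO, explained",
     "author": {"@id": "#jane"},
     "publisher": {"@id": "#org"}}
  ]
}
""")

# Index every node by @id, then follow references: the graph can be entered
# at any node, in any order. Sequence carries no meaning; the links do.
nodes = {n["@id"]: n for n in doc["@graph"]}
author = nodes[nodes["#post"]["author"]["@id"]]
employer = nodes[author["worksFor"]["@id"]]
print(author["name"], "->", employer["name"])  # Jane Doe -> Acme Analytics
```

Reorder the three nodes in the @graph array and nothing changes: a glottographic text would be destroyed by that shuffle, a semasiographic one is not.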
And like the Heptapod languages, these two systems are grammatically unrelated. Schema markup is not a “translation” of your page content. It is a parallel, structurally independent language that communicates with a fundamentally different type of intelligence.
Now, a necessary precision. I want to be clear about what structured data does and what it does not do in the current AI search landscape, because the industry is drowning in overpromises on this topic.
There is a widely cited statistic claiming that GPT-4 accuracy jumped from 16% to 54% when supplemented with structured data. That study — by Sequeda, Allemang, and Jacob (November 2023) — tested something quite specific: Knowledge Graph ontologies applied to enterprise SQL database querying. It was about SPARQL vs. SQL, not about Schema.org on web pages improving LLM citation rates.
No version of that study has been replicated with current-generation models (GPT-5.3/5.4, Gemini 3.1, Claude Opus/Sonnet 4.6) or applied to web markup. As Aimee Jurenka wrote on Search Engine Land on March 25, 2026: “To date, there are no peer-reviewed studies on schema’s impact on AI search visibility.”
Does this mean structured data is irrelevant? Absolutely not. But its value is not as a direct lever for AI search visibility.
Its value is architectural.
Structured data is the connective tissue between your brand entities and the Knowledge Graph. It is how you make explicit what is implicit in your content: the relationships between your organization, your products, your authors, your locations, and your expertise.
It is the Heptapod B that allows machines to perceive your brand as a coherent entity rather than a collection of disconnected pages.
Google has confirmed that AI Overviews do process schema markup. Bing Copilot has confirmed the same. But the mechanism is not “add schema, get cited.” The mechanism is: structured data connects your entities to the Knowledge Graph; a coherent Knowledge Graph presence establishes your brand as a recognized, disambiguated entity; entity recognition is a precondition for being considered a credible source by AI systems. Schema is infrastructure, not tactic. The foundation of the building, not the sign on the door.
Michael King at iPullRank frames this correctly: structured data is “critical infrastructure for AI retrievability”; in other words, one component alongside entity authority, content structure, and distributed credibility. It is the necessary baseline, not the sufficient condition. The Heptapod B that makes the Heptapod A interpretable to a non-human intelligence.
Weapon or tool? The mistranslation that is splitting the industry
The geopolitical crisis in Arrival hinges on a mistranslation.
The Heptapods offer humanity their language, and the Chinese team, which has been interacting with the aliens using mahjong tiles (a competitive, zero-sum game), interprets the message as “Use weapon.”
Russia, meanwhile, receives “There is no time” and reads it as a threat. The competitive framing produces a threatening interpretation. Nations cut communications. Military escalation begins.
Louise argues the word could mean “tool,” “means,” or “technology.” The resolution — the film’s pivotal revelation — is devastating in its elegance: the “weapon” is the language. Learning it restructures human consciousness. It is a gift, not a threat.
The SEO industry is living this scene right now.
One camp — the mahjong camp — reads AI search through an adversarial, zero-sum lens. AI is “stealing our traffic.” Zero-click search is an existential threat.
The numbers support this reading, superficially: Seer Interactive’s September 2025 study across 42 organizations found organic CTR dropping 61% for queries with AI Overviews. Google traffic to global publishers dropped by a third in 2025 (Reuters Institute/Press Gazette). The Chartbeat/Axios report from March 2026 found small publishers losing 60% of search referral traffic over two years. Chegg’s revenue collapsed 40%+ year-over-year. The Verge lost 85% of its traffic.
The other camp — the linguistics camp — reads the same data and sees the gift.
AI platform referrals grew 632% year-over-year, albeit from a tiny base (Contentsquare, 2026). ChatGPT now sends more referral traffic than Reddit and LinkedIn combined (Ahrefs, June 2025). AI-referred visitors convert at 4.4x the rate of organic visitors (Lantern, March 2026). Brands cited in AI Overviews see 35% higher organic CTR than non-cited brands (Seer Interactive, linked above). Canva emerged as an AI search winner, with LLM referral traffic reaching double-digit percentages of total traffic and 265 million monthly active users.
The diagnostic question for every practitioner, every CMO, every brand strategist is this: Are you reading AI search through mahjong or through linguistics?
LinkedIn’s response is the case study that makes this choice concrete.
When their B2B Organic Growth team disclosed in January 2026 that non-brand awareness traffic had collapsed by up to 60% — despite rankings remaining stable — they didn’t retreat into adversarial posture. They created a cross-functional AI Search Taskforce spanning SEO, PR, editorial, product marketing, paid media, social, and brand. And they abandoned the sequential “rank, click, visit, convert” model entirely, replacing it with a new framework: “Be seen, be mentioned, be considered, be chosen.”
That is not a tactic. That is a cognitive restructuring. That is Louise Banks putting down the dictionary and picking up the ink.
The nations in Arrival that cut communications nearly triggered a war. The ones that stayed at the table — that kept trying to learn the language — gained access to technologies that would transform civilization. The parallel writes itself.
The consensus layer: simultaneous perception and distributed credibility
The Heptapods perceive all events simultaneously.
Their physics begins with the calculus of variations, and they see the entire path, all constraints, and the optimal solution at once.
This is not mysticism in Chiang’s telling. It is a different but equally valid mode of parsing the same physical reality. As the novella states: “The physical universe was a language with a perfectly ambiguous grammar. Every physical event was an utterance that could be parsed in two entirely different ways, one causal and the other teleological, both valid, neither one disqualifiable.”
AI search systems synthesize answers through what I would call a consensus layer: checking claims across multiple sources and generating a unified response.
Perplexity’s architecture runs queries through multiple AI models simultaneously and synthesizes where they agree.
Google AI Mode cross-references an average of 12.6 source links per response (SE Ranking, August 2025).
The AI doesn’t read sources sequentially and pick the best one. It perceives them simultaneously and synthesizes.
The implications for brand visibility are structural, not tactical.
AirOps’ October 2025 study — analyzing 21,311 brand mentions across ChatGPT, Claude, and Perplexity — found that 85% of brand mentions in AI responses came from third-party sources, not from the brand’s own website. Only 13.2% came from owned domains. Nearly 90% of those third-party mentions originated from listicles, comparisons, and reviews. Stacker’s March 2026 expanded study corroborated this: earned media distribution delivered a median 239% lift in AI visibility, with distributed content 5.3x more likely to be the sole source of AI visibility. SE Ranking’s November 2025 research quantified the authority threshold: sites with 32,000+ referring domains are 3.5x more likely to be cited by ChatGPT than those with fewer than 200. Kevin Indig’s recent analysis confirmed the concentration: the top 10 domains capture 46% of all ChatGPT citations in a given topic.
Your brand must exist as a consistent, verifiable truth across the entire information landscape, not just on your own website.
Like a Heptapod semagram encoding meaning across the entire page at once, your entity must be recognized, corroborated, and present everywhere simultaneously.
I have called this “sovereign authority” in other contexts: the idea that your brand entity needs to be so clearly defined, so consistently corroborated across so many independent sources, that an AI system encountering a query about your domain has no choice but to include you in the synthesis.
The consensus layer doesn’t reward the loudest voice or the highest-ranking page. It rewards distributed, corroborated truth. Holistic presence. Simultaneous perception across the entire information landscape.
That is Heptapod physics applied to brand strategy.
The gift the machines are offering
The weapon is the language.
In Arrival, the Heptapods offer humanity their writing system, and learning it doesn’t just give Louise a new tool for communication. It restructures her perception of time, causality, and choice.
She doesn’t “use” Heptapod B. She becomes someone who thinks in Heptapod B.
The language is not an instrument. It is a transformation.
AI search is offering us the same gift, if we can stop reading it through mahjong tiles long enough to recognize it.
The tool is not a new ranking algorithm. It is a new way of representing knowledge: entities, relationships, structured data, distributed credibility, and semantic coherence.
Learning it changes not just our SEO strategy but our understanding of what our brand is as a knowledge entity. Not a collection of pages. Not a portfolio of rankings. A coherent node in the world’s knowledge graph, recognizable by any intelligence — human or artificial — that encounters it.
Louise Banks’s voiceover closes the film with a question that is not really a question: “Despite knowing the journey and where it leads, I embrace it. And I welcome every moment of it.”
We know where this journey leads. The data is unambiguous. Traditional organic click-through is eroding. AI systems are intermediating an increasing share of information discovery. The consensus layer rewards entity coherence, distributed authority, and structured knowledge over sequential ranking signals.
The question is not whether AI search will transform the discipline. The question is whether you will learn the language or whether you will interpret the gift as a weapon and retreat into isolation.
Both parsings of reality remain valid, for now. The causal and the teleological. The sequential and the simultaneous. But only one is the future.
And, like Louise, the practitioners who learn to think in Heptapod B will see that future before everyone else.
Not because they have better tools. Because they have a different language. And a different language means a different consciousness.
The ships are already here. The ink is on the glass.
Are you learning to read it?