2025-12-06
The internet is undergoing a paradigm shift. Where once businesses focused on ranking pages in search results, today’s users are increasingly relying on AI-powered agents for direct answers and recommendations. In this “agentic web” era, brands must become trusted, machine-readable sources for AI, not just chase high search rankings.
This shift demands a new strategy. Today’s consumers often turn first to chatbots and AI assistants. These tools synthesize information across sources and present it conversationally, often answering questions right on the spot. A smartphone query might now yield an AI response listing top results, images, and even phone numbers directly, so users rarely click onward. In many cases, AI-driven referrals convert better: answers from AI tend to pre-qualify users, so the traffic they do send is higher-intent.
Put simply, visibility in the agentic web means becoming that trusted source AI wants to cite. In this article, we’ll unpack this new landscape and its battles – from legal fights like Amazon vs. Perplexity, to content owners suing AI platforms, to the emergence of a new discipline called Generative Engine Optimization (GEO). Along the way, we’ll show how businesses can adapt: optimizing for E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness), structured data, and real-world proof points to earn AI citations, not just clicks.
For additional context on foundational SEO, see our guide to SEO strategies for Edmonton websites.

The old SEO playbook — keyword-stuffing pages and chasing rankings — is quickly becoming obsolete. Today’s search models act more like answer engines. Google, Bing, and specialized AI platforms now provide direct answers, not just a list of links.
When these AI summaries appear, users often don’t click through. The upshot: brands can no longer rely on high page rank alone to drive traffic. Instead, visibility means being cited inside AI responses. In other words, “presence in the answer” is replacing “position on the page.”
A growing number of queries now end on the results page. AI chatbots and summary panels satisfy the query on the spot. Google’s AI Overviews, for example, can take up a large portion of the screen, especially on mobile, pushing traditional listings below the fold.
Practically, this means:
User queries are also becoming longer and more conversational. Instead of terse keywords like:
best patio heater
people now ask full, contextual questions like:
What’s the best patio heater to keep me warm on a chilly Edmonton evening, that’s safe for kids and under $500?
AI assistants maintain conversational context, letting users refine answers with follow-ups. They’re not just matching keywords; they’re parsing intent, constraints, and preferences over multiple turns.
SEO has to adapt:
This dramatic change in search mechanics has already prompted a legal and economic battleground, which brings us to one of the most important case studies in the agentic era: Amazon vs. Perplexity.

One of the most telling skirmishes in this transition is Amazon’s recent fight with AI startup Perplexity. Perplexity offers shopping agents (like their Comet browser assistant) that can programmatically find the best products across the web for users. This directly challenges Amazon’s lucrative retail media business, which depends on influencing human shoppers with sponsored placements and upsells.
At its core, this conflict is ad revenue vs. user agency.
Amazon alleges that Perplexity’s shopping agent violates its Terms of Service and even the US Computer Fraud and Abuse Act (CFAA). Their argument is that:
From Amazon’s perspective, Perplexity’s assistant is committing “computer fraud” because it programmatically accesses Amazon as if it were a human shopper, bypassing Amazon’s rules and threatening a multi-billion-dollar ad ecosystem.
Perplexity counters that its agent isn’t a rogue bot at all, but rather a user’s assistant:
Perplexity has described Amazon’s move as anti-competitive bullying meant to protect its ad business, pointing out the irony that Amazon is building its own shopping bot (“Buy for Me”) at the same time. In that framing, the issue isn’t user safety — it’s maintaining control of the walled garden.
At stake is a big, precedent-setting question:
Is an AI agent legally part of the user, or is it just another “bot”?
This isn’t just a fight between two companies. It’s a battle over who gets to mediate user intent in the age of AI — the platforms, or the agents users choose.
Amazon isn’t the only one pushing back. Major publishers — including the New York Times, Wall Street Journal, and others — have filed lawsuits against AI platforms like OpenAI and Perplexity. The core complaints:
AI companies argue they’re citing facts and transforming content, not simply copying it. But the legal outcomes will have huge implications for how AI agents can use and summarize proprietary content — and how businesses should think about their own content in an agentic world.
Perplexity AI embodies both the promise and peril of agentic search.
Perplexity’s “Answer Engine” is genuinely compelling from a user standpoint:
It feels less like “searching” and more like getting a briefing from a well-read assistant that can show its work.
But behind the scenes, Perplexity’s methods have raised serious concerns.
Security researchers and infrastructure providers have reported that Perplexity’s crawlers:
robots.txt directives, even when sites explicitly disallow crawling.Perplexity’s own documentation has at times acknowledged that its user-facing fetch agent “generally ignores robots.txt,” which many see as an open rejection of one of the web’s longstanding trust mechanisms.
This behavior breaks the informal social contract of the web and forces organizations into a more expensive, complex arms race of active defense to protect their content.
Agentic browsing also introduces a new class of attack: prompt injection.
Prompt injection happens when malicious instructions are hidden inside a page’s HTML — in invisible text, comments, or attributes. A human would never see them, but an AI agent parsing the raw HTML might treat them as legitimate instructions.
Examples of what an injected prompt might silently tell an agent:
To counter this, Perplexity has developed BrowseSafe, an open-source detection model that:
The key lesson: in the agentic web, cybersecurity shifts from the network perimeter to the AI’s input filter. Defending the agent’s “mind” — what it reads and trusts — becomes just as important as defending your servers.
All of this leads to a new discipline: Generative Engine Optimization (GEO).
GEO is the practice of structuring and signaling your content so that AI agents (not just search engines) recognize it as credible, cite it in answers, and recommend your brand. It evolves classic SEO, but with a sharper focus on:
E-E-A-T is the single most critical concept in the age of AI search. It originated in Google’s Search Quality Rater Guidelines and has become a shorthand for how both humans and machines judge the quality of a source.
Practical ways to strengthen E-E-A-T:
AI systems are more likely to cite sources that look like they come from real experts doing real work — not anonymous content farms. For a deeper dive into how we think about this, see our piece on the rise of AI agencies.
AI models don’t read pages like humans; they parse the structure.
To make your content machine-friendly:
Implement Structured Data (Schema.org)
Use JSON-LD schema for entities like LocalBusiness, FAQ, Article, Product, and Person. This explicitly tells AI what the content is about, not just what it says.
Use clear, scannable formatting
Organize content with:
This helps AI quickly extract key facts and recommendations.
Create Question-and-Answer content
Build FAQ pages, blog posts, and resource hubs that directly answer the kinds of questions users ask AI tools. Phrasing headings and sections as questions (“How do I…?”, “What is the best…?”) aligns your content with how prompts are written.
At Agency7, our digital strategy services approach this as part of a broader AI-ready content architecture: making sure your site is easy to understand for both humans and machines.
For AI, what others say about you is often more important than what you say about yourself. You need a strong footprint beyond your own domain.
Key pillars:
NAP Consistency
Ensure your Name, Address, and Phone Number (NAP) are identical across your Google Business Profile, Yelp, industry directories, and local listings. This consistency helps AI confirm your identity and location — especially for local discovery.
Authoritative Mentions
Aim to be referenced in:
AI models pull from a wide range of sources. When your brand shows up repeatedly in trustworthy contexts, it signals authority.
Customer Reviews
Encourage and manage reviews on platforms like Google, Yelp, and industry-specific review sites. AI models parse not just star ratings but sentiment and detail in review text to assess quality.
Our work with medium to large organizations typically includes auditing this trust footprint as part of a broader digital strategy and SEO/AI roadmap.
The internet is evolving from a web of pages to a web of agents.
The old SEO metrics — clicks, ranks, impressions — are being overshadowed by citations, credibility, and trust. Legal battles like Amazon vs. Perplexity and publishers vs. AI platforms aren’t just tech drama; they’re the early power struggles setting the rules of this new landscape.
What’s already clear is this:
In the agentic web, visibility equals trust.
The brands that win will be those that:
The agentic web is here. The question isn’t whether AI will mediate your customer’s journey — it’s whether your brand will be part of the answer.
If you’re ready to future-proof your digital strategy and make your brand AI-ready, explore more of our insights or get in touch with our team via the contact page to start your GEO and agentic web strategy.