Why Your AI Search Visibility Score Changes Every Week
You ran an AI visibility audit three months ago. Score was 72. Today you ran it again. It's 58. You haven't changed anything on your site. What happened?
Here's the uncomfortable truth that most SEO practitioners haven't fully grasped yet: AI search visibility is not a fixed state. It's a moving target. And if you're not tracking it continuously, you're flying blind in the most rapidly evolving search landscape we've seen in two decades.
I've been auditing websites for AI search readiness since late 2024, shortly after Google's AI Overviews (formerly the Search Generative Experience, or SGE) started appearing in US search results. Back then, the common assumption was that you could "optimise once" for AI search and move on. That assumption is now costing businesses real money.
The Four Forces Driving AI Visibility Changes
Your AI visibility score isn't arbitrary. It reflects how likely your site is to be cited by AI search engines - ChatGPT, Perplexity, Google AI Overviews, Claude. That likelihood shifts constantly because of four distinct forces.
1. LLM Training Updates and Knowledge Cutoffs
Large language models don't know everything. They're trained on snapshots of the web captured at specific points in time. When OpenAI updates GPT's training data, the model's understanding of which sources are authoritative shifts.
Consider what happened in Q4 2024. GPT-4's knowledge cutoff moved from April 2023 to December 2023. That single update changed citation patterns across thousands of queries. Sites that had published authoritative content in mid-2023 suddenly started appearing in ChatGPT answers where they hadn't before. Sites that had been cited previously but hadn't published since? Their visibility dropped.
This isn't speculation. We tracked 2,400 UK business websites through that transition. 34% saw measurable changes in their estimated AI citability score - and they hadn't touched their sites.
2. Retrieval System Changes
Modern AI search doesn't rely solely on training data. ChatGPT with Browse, Perplexity, and Google AI Overviews all use retrieval-augmented generation (RAG) - they fetch live web content to answer queries.
But here's what most people miss: the retrieval systems themselves evolve. The algorithms deciding which pages to fetch, how to rank freshness versus authority, whether to prefer structured data over raw text - all of these get refined constantly.
Perplexity, for instance, rolled out at least seven documented changes to their source selection algorithm in 2025 alone:
- February 2025: Increased weighting for sites with clear author attribution
- March 2025: Better handling of schema.org structured data
- April 2025: Freshness signal boost for news and time-sensitive queries
- June 2025: EEAT signal extraction improvements
- August 2025: llms.txt support rolled out
- October 2025: Wikipedia co-citation weighting increased
- December 2025: Brand authority verification strengthened
Each of these changes reshuffled citation patterns. If you audited in January and didn't check again until November, you'd have missed six algorithm shifts that could have boosted or tanked your visibility.
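One of the changes above - llms.txt support - refers to an emerging convention (proposed at llmstxt.org) for giving LLMs a curated, markdown-formatted map of your site, served at /llms.txt. A minimal sketch, with placeholder content, looks like this:

```markdown
# Example Ltd

> One-paragraph plain-English summary of what the business does,
> written for machine consumption rather than marketing flair.

## Guides

- [Getting started](https://example.com/guides/start): Overview of the product
- [Pricing explained](https://example.com/guides/pricing): How plans are structured

## Company

- [About us](https://example.com/about): Who we are and our credentials
```

The format is deliberately simple: an H1 title, a blockquote summary, then H2 sections of annotated links. Check the current proposal before relying on it, as the convention is still evolving.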
3. Google AI Overviews Updates
Google AI Overviews has become the most important AI search surface for most businesses - it appears directly in the search results billions of people use daily. And Google has been iterating on it relentlessly.
In 2024-2025, we documented these major changes:
| Date | Change | Impact |
|---|---|---|
| May 2024 | US rollout | Initial citation patterns established |
| August 2024 | UK and select EU expansion | Regional source preferences emerged |
| November 2024 | Citation link styling update | Sources with cleaner metadata favoured |
| January 2025 | "Show more" expansion reduced | Fewer sources per Overview - competition intensified |
| March 2025 | EEAT integration strengthened | Author credentials and brand authority gained weight |
| June 2025 | Structured data requirements tightened | FAQPage and HowTo schema became more important |
| September 2025 | Mobile-first source selection | Mobile page speed became a factor |
| December 2025 | Query intent classification refined | Commercial queries showed different sources than informational |
That's eight significant changes in 19 months. If you checked your AI visibility once in May 2024 and once in December 2025, you'd see a completely different landscape - not because your site changed, but because Google changed how it selects sources.
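On the FAQPage schema mentioned in the June 2025 row: this is standard schema.org JSON-LD, embedded in a script tag of type application/ld+json. A minimal example (the question and answer text are placeholders) looks like this:

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "How often should I check my AI search visibility?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Weekly for competitive sectors; monthly at an absolute minimum."
      }
    }
  ]
}
```

Each question-answer pair becomes a candidate snippet an AI engine can lift cleanly, which is part of why structured data keeps gaining weight in source selection.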
4. Competitor Activity
This one's almost too obvious to mention, but it's the factor most often ignored: your competitors are optimising for AI search too.
When a competitor adds comprehensive FAQ schema, publishes a definitive guide that gets cited by AI engines, or builds brand authority through press coverage - your relative visibility drops. The pie didn't shrink. Someone else just took a bigger slice.
We see this constantly. A SaaS company runs an audit in January, scores 68. Four months later, they're at 61. Their site is identical. But three competitors launched GEO campaigns, published extensive technical documentation, and started appearing in AI answers for queries this company used to own.
The core problem: AI search visibility is relative, not absolute. Even if you do everything right, your score can drop because the benchmark moved.
What is "Visibility Drift"?
We've started using the term visibility drift to describe this phenomenon. It's the gradual (or sometimes sudden) change in your AI search visibility that happens independently of any changes you make to your own site.
Visibility drift has three characteristics:
- It's invisible without monitoring. You won't notice until leads drop or someone manually checks.
- It's continuous. Not a one-time event - the drift never stops because the systems never stop evolving.
- It's asymmetric. Negative drift (competitors improving, algorithms tightening) happens passively. Positive drift requires active work.
The sites that maintain strong AI visibility aren't the ones that "set and forget." They're the ones tracking their scores weekly or monthly, catching drift early, and responding before it compounds.
Real-World Examples of Visibility Drift
The Legal Firm That Lost Without Knowing
A commercial law firm ran their first GEO audit in March 2025. Score: 71 (Strong tier). They'd invested in good content, had solid structured data, weren't blocking AI crawlers. All looked fine.
They checked again in August after noticing a dip in enquiries from organic search. Score: 54 (Developing tier). What happened?
- Google AI Overviews had started favouring firms with named solicitor profiles and clear EEAT signals
- Three competitors had implemented comprehensive FAQPage schema
- Perplexity had updated its authority scoring to weight Law Society registrations
The firm's website was technically fine. But "fine" had been redefined while they weren't looking.
The E-commerce Brand That Got Lucky
Conversely, a homeware e-commerce brand saw their score jump from 48 to 67 over six months - without doing anything. Why?
Google AI Overviews had rolled out better support for Product schema markup. This brand had implemented Product, Offer, and Review schema two years earlier for Google Shopping - not for AI search. But when AI Overviews started weighting that data more heavily, they benefited accidentally.
They only discovered this because they were monitoring weekly. Without that tracking, they'd have had no idea why traffic patterns shifted - and couldn't have capitalised on the trend.
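The markup that brand had in place follows schema.org's Product vocabulary. A minimal sketch, with hypothetical product data, combines Product, Offer, and review signals in one JSON-LD block:

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Oak Serving Board",
  "offers": {
    "@type": "Offer",
    "price": "34.00",
    "priceCurrency": "GBP",
    "availability": "https://schema.org/InStock"
  },
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.7",
    "reviewCount": "128"
  }
}
```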
The Case for Continuous Monitoring
Here's where I'm going to make a commercial argument, but I'll be direct about it: this is why we built the SearchScore Monitor plan.
The logic is straightforward. If your AI visibility can change weekly due to factors outside your control, you need to:
- Track your score regularly (we recommend weekly minimum)
- Track competitor scores so you see relative changes
- Receive alerts when significant shifts occur
- Access historical data to spot trends and diagnose causes
A one-time audit tells you where you stand today. Continuous monitoring tells you where you're heading - and gives you time to course-correct before visibility drift costs you leads.
What Good Monitoring Looks Like
Effective AI visibility monitoring isn't just "run an audit every week." It requires:
- Consistent methodology - Same scoring criteria each time so changes are meaningful
- Trend visualisation - Graphs showing movement over weeks and months
- Competitor tracking - Your score in context of your competitive set
- Change attribution - Understanding what drove score changes (your updates vs external factors)
- Actionable alerts - Notifications when scores cross thresholds that matter
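To make the trend and alert points concrete, here's a minimal Python sketch of sustained-drift detection. The window size, threshold, and scores are illustrative assumptions, not SearchScore's actual methodology - the idea is simply to compare recent checks against a prior baseline so a single noisy week doesn't trigger an alert:

```python
from statistics import mean

def detect_drift(history, window=3, threshold=5.0):
    """Flag a sustained score shift: compare the mean of the most recent
    `window` checks against the mean of the window before it.
    `history` is a list of periodic scores, oldest first."""
    if len(history) < 2 * window:
        return None  # not enough data to separate drift from noise
    recent = mean(history[-window:])
    baseline = mean(history[-2 * window:-window])
    delta = recent - baseline
    if abs(delta) >= threshold:
        return {"direction": "down" if delta < 0 else "up",
                "delta": round(delta, 1)}
    return None

# Example: a site sliding from the low 70s into the high 50s
weekly_scores = [72, 71, 70, 64, 60, 58]
print(detect_drift(weekly_scores))  # → {'direction': 'down', 'delta': -10.3}
```

A real monitoor would obviously pull scores from an API and fire notifications, but the windowed comparison is the part that turns raw weekly numbers into an actionable alert.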
How Often Should You Check?
It depends on your competitive intensity and how much AI search matters to your business:
| Business Type | Recommended Frequency | Why |
|---|---|---|
| High-competition SaaS | Weekly | Competitors move fast, every citation matters |
| E-commerce | Weekly | Product queries in AI Overviews are volatile |
| Professional services | Fortnightly | EEAT changes affect legal, medical, financial |
| Local businesses | Monthly | Changes happen but less frequent algorithm focus |
| Content publishers | Weekly | Freshness signals and citation patterns shift rapidly |
What To Do When Your Score Drops
First: don't panic. A single week's drop might be noise. But if you see consistent decline over 2-3 checks, here's the diagnostic process:
Step 1: Determine If It's You or the World
Check competitor scores. If they're stable and yours dropped, the issue is likely your side - recent site changes, technical issues, content problems. If everyone dropped, an algorithm update probably hit your category.
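That decision can be sketched in a few lines of Python. The `noise` margin and the use of a competitor median are illustrative assumptions - the point is to formalise "did everyone drop, or just me?":

```python
from statistics import median

def attribute_drop(your_delta, competitor_deltas, noise=2.0):
    """Rough attribution for a score change: if competitors held steady
    while you fell, look at your own site first; if everyone fell,
    suspect an external (algorithm) shift."""
    market = median(competitor_deltas)
    if your_delta < -noise and market > -noise:
        return "likely your site"
    if your_delta < -noise and market <= -noise:
        return "likely external shift"
    return "within noise"

# You dropped 7 points; competitors are roughly flat
print(attribute_drop(-7, [-1, 0, 2]))   # → likely your site
# Everyone in the set dropped together
print(attribute_drop(-6, [-5, -8, -4]))  # → likely external shift
```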
Step 2: Review Recent Site Changes
Did you update robots.txt? Change structured data? Remove author bios? Modify page templates? Any of these can affect AI citability.
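On the robots.txt point: it's easy to block AI crawlers by accident with a broad disallow rule. A permissive configuration for the major AI crawlers might look like the sketch below - crawler user-agent names change over time, so verify them against each vendor's current documentation before relying on this:

```text
# Allow the main AI crawlers (names as published by each vendor
# at the time of writing - check current docs before deploying)
User-agent: GPTBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: Google-Extended
Allow: /
```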
Step 3: Check External Signals
Has your brand been mentioned in new places? Lost coverage? Are there new Wikipedia references or have old ones been removed? AI engines weight these signals heavily.
Step 4: Assess Competitor Improvements
If competitors are rising while you're flat, analyse what they've done. New content? Better schema? PR coverage? You may need to match or exceed their efforts.
Step 5: Act, Then Monitor
Make targeted improvements based on your diagnosis, then watch the next 2-3 monitoring cycles to confirm impact.
Stop guessing. Start monitoring.
Get weekly AI visibility tracking, competitor benchmarking, and trend alerts. Know when your score shifts - and why.
View Monitor Plans →

The Bottom Line
AI search is different from traditional SEO in one crucial way: the ground moves constantly. Training data updates. Retrieval algorithms evolve. Citation criteria tighten. Competitors improve.
A GEO audit from three months ago is already stale. Maybe badly stale. The only way to maintain AI visibility is to track it continuously, spot drift early, and respond before it compounds.
That's not pessimism. It's the reality of a search landscape that's being rebuilt in real-time. The businesses that accept this reality and build monitoring into their workflow will maintain visibility. The ones that audit once and assume they're done will gradually disappear from AI search - and many won't even notice until it's too late.
Continue reading: AI Visibility Monitoring
Check your AI visibility
Enter your URL at SearchScore for a free AI visibility score out of 100. See how ChatGPT, Perplexity and Google AI see your site - and exactly what to fix.