Public sample

AI Visibility and Tech Audit for
InterAmerican

How Greek private buyers, expats, and SME owners find you — and don't — across ChatGPT, Gemini, Perplexity, Claude, and Google AI Overviews. Audited across 10 buyer prompts, 5 AI engines, and 5 site pages, covering crawlability, content optimization, sentiment, and rankings.

Default visibility: public. Anyone with the link can read this report. Sign in to your RankBee account to make it private to your team.
InterAmerican
interamerican.gr
Generated: April 27, 2026
Audit window: Last 14 days
Report ID: RB-2026-04-IA-0427
What's in this report

Four sections covering technical access, AI visibility, content, and reputation.

This is more than a crawl audit. We measure where your buyers go to find you, what AI says when they ask, and what's missing from your story.

01 · Content Scorecard

Content quality: How well does your content perform vs the competition?

We run a simulation: RankBee crawls each of your pages and scores it 1–10 against your competitors. We don't just guess whether what you've written could score high — we simulate it, so we know which content is best placed to rank higher and what changes you need to make to...

Page-by-page scoring
As % · 5 pages graded
10% your avg · 29% leader avg
Page | Your score | Leader | Δ
Homepage (/) | 10% | 25% · https://alfapoint.gr/%CE%B1%CF%83%CF%86%CE%AC%CE%BB%CE%B9%CF%83%CE%B7-%CF%85%CE%B3%CE%B5%CE%AF%CE%B1%CF%82/bewell-vs-full-health/ | 15 pp
Bewell health insurance (/idiotes/proionta-ypiresies/ygeia/asfaleia-ygeias) | 12% | 20% · https://www.infomax.gr/ | 8 pp
Group / About (/idiotes/omilos-interamerican) | 10% | 28% · https://el.wikipedia.org/wiki/Interamerican | 18 pp
Network / Partners (/idiotes/anazhthsh-diktyo-interamerican) | 10% | 29% · https://www.insuranceline.gr/about-us/ | 19 pp
Press Office (/idiotes/omilos-interamerican/grafeio-typoy) | 10% | 19% · https://www.fortunegreece.com/article/i-interamerican-sto-delphi-economic-forum-xi-diamorfonontas-ton-dialogo-gia-tin-anthektikotita-kai-ton-metasximatismo/ | 9 pp

Content quality leaderboard

Weighted average across audited pages

Brand | GAIO Score | Avg Rank
1. insuranceline.gr | 29% | 1.00
2. el.wikipedia.org | 28% | 1.00
3. csrhellas.org | 21% | 2.00
4. vrisko.gr (yellow pages) | 20% | 3.00
5. alfapoint.gr (comparison) | 20% | 2.50
6. internationalinsurance.com | 20% | 2.00
7. voria.gr | 19% | 3.00
8. fortunegreece.com | 19% | 1.00
9. icisa.org | 18% | 3.00
23. InterAmerican | 10% | 9.40
02 · AI Rankings Matrix

Visibility coverage: where you appear vs. competitors

Real buyer prompts run live against 5 AI engines — ChatGPT, Gemini, Perplexity, Claude, and Google AI Overviews — on 27 April 2026. Mentions are counted only when a brand is named in the answer or footnoted as a source. 10 prompts × 5 engines = 50 cells per brand. This is real...

ChatGPT (GPT-5.4 · web_search): 50% you vs 50% Generali Hellas · +0 pp gap
Gemini (2.5 Pro · web): 60% you vs 50% Generali Hellas · +10 pp gap
Perplexity (Sonar · web): 40% you vs 10% Generali Hellas · +30 pp gap
Claude (Sonnet 4.5 · web_search): 70% you vs 80% Generali Hellas · -10 pp gap
Google AI Overviews (AI Mode): 80% you vs 60% Generali Hellas · +20 pp gap
AI coverage matrix
All 10 prompts shown
Tracked brands: You · Generali Hellas · Ethniki Asfalistiki · NN Hellas · Eurolife FFH
(Per-engine mention cells for ChatGPT, Gemini, Perplexity, Claude, and Google AI Overviews are rendered as a visual grid in the live report.)

1. Vendor evaluation: Best insurance company in Greece 2026 for health and life cover
2. Vendor evaluation: InterAmerican vs Ethniki Asfalistiki vs NN Hellas comparison
3. Vendor evaluation: Top private health insurance providers for families in Greece
4. Vendor evaluation: Best official insurance partners for SMEs in Greece
5. Operational & risk: Best motor insurance with roadside assistance in Greece
6. Operational & risk: AI-powered claims handling insurance vendors for the Greek market
7. Operational & risk: Insurance fraud detection and risk management providers in Greece
8. Compliance & tax: Solvency II compliant insurers operating in Greece with EU data residency
9. Compliance & tax: GDPR compliant private health insurance for Greek operators
10. Infrastructure & setup: Insurance APIs that integrate with Salesforce, SAP and Microsoft Dynamics in Greece

AI Coverage Leaderboard

Across all 10 prompt × 5 model cells

Brand | GAIO Score | Avg Rank
1. InterAmerican | 60% | 2.45
2. Generali Hellas | 50% | 3.25
3. Ethniki Asfalistiki | 48% | 4.00
4. NN Hellas | 36% | 5.20
5. Eurolife FFH | 32% | 5.10
6. Allianz Hellas | 20% | 5.75
7. ERGO Hellas | 14% | 6.05
8. Groupama Greece | 8% | 7.30
9. European Reliance | 6% | 7.65
10. Atlantiki Enosi | 0% | 8.25
03 · AI Crawlability Audit

Identify risk areas for your AI SEO Crawl strategy

Before your content can be cited, it has to be crawled and read. We tested five layers: 1) what your robots.txt declares, 2) what real users experience, 3) what AI search agents retrieve through WebSearch, 4) what bots get past your CDN, and 5) whether your text content can be...

PHASE 1

Robots.txt analysis

Permissive — all bots allowed

What your robots.txt declares to each AI crawler, and which bots are allowed, blocked, or partially restricted.

Risk: Low · Crawlers: 24 · Allowed: 24 · Blocked: 0 · Partial: 0
robots.txt: HTTP 200 · 7 lines · fetched 27 Apr 2026 15:49 UTC
Legend: allowed · !partial · blocked
Bot | Provider | Role | Status | Rule applied
GPTBot | OpenAI | Training crawler | Allow | Inherits *, no AI-bot block
ChatGPT-User | OpenAI | User-triggered fetch | Allow | Inherits *
OAI-SearchBot | OpenAI | Search indexing | Allow | Inherits *
ClaudeBot | Anthropic | Training crawler | Allow | Inherits *
Claude-User | Anthropic | User-triggered fetch | Allow | Inherits *
Claude-SearchBot | Anthropic | Search indexing | Allow | Inherits *
anthropic-ai | Anthropic (legacy) | — | Allow | Inherits *
Google-Extended | Google | Training opt-out flag | Allow | Inherits *
GoogleOther | Google | General fetcher | Allow | Inherits *
PerplexityBot | Perplexity | Search indexing | Allow | Inherits *
Perplexity-User | Perplexity | User-triggered fetch | Allow | Inherits *
CCBot | Common Crawl | Training crawler | Allow | Inherits *
Bytespider | ByteDance | Training crawler | Allow | Inherits *
Meta-ExternalAgent | Meta | Search indexing | Allow | Inherits *
Meta-ExternalFetcher | Meta | Search indexing | Allow | Inherits *
Applebot-Extended | Apple | Search indexing | Allow | Inherits *
Amazonbot | Amazon | Search indexing | Allow | Inherits *
DuckAssistBot | DuckDuckGo | Search indexing | Allow | Inherits *
Diffbot | Diffbot | Search indexing | Allow | Inherits *
Omgilibot | Webz.io | Search indexing | Allow | Inherits *
FriendlyCrawler | FriendlyCrawler | Search indexing | Allow | Inherits *
ImagesiftBot | Imagesift | Search indexing | Allow | Inherits *
Cohere-ai | Cohere | Training crawler | Allow | Inherits *
Timpibot | Timpi | Search indexing | Allow | Inherits *
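The permissive result above comes entirely from the wildcard group; no AI bot is named explicitly. If you later want this posture to be deliberate rather than inherited, a robots.txt along these lines separates search and user-triggered fetchers from training crawlers. This is an illustrative sketch only: the bot names come from the table above, and the blanket training opt-out is an example policy, not a recommendation.

```text
# Keep answer-engine search and user-triggered fetchers open
User-agent: OAI-SearchBot
User-agent: ChatGPT-User
User-agent: Claude-SearchBot
User-agent: PerplexityBot
Allow: /

# Example: opt out of model training only
User-agent: GPTBot
User-agent: ClaudeBot
User-agent: CCBot
User-agent: Google-Extended
Disallow: /

# Everyone else inherits the default
User-agent: *
Allow: /
```

Per RFC 9309, a crawler obeys the most specific group that matches its user-agent, so the bots named above stop inheriting the * rules once they are listed.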
PHASE 2

Virtual user crawl test

2 probes returned non-200

Headless visits from a 🇺🇸 US IP and a 🇬🇷 GR IP confirm the site is reachable for real readers — and therefore reachable for AI crawlers that proxy through the same regions. This is a sanity check, not a deep audit.

🇺🇸 US · success · ✅ Reachable from US IP after redirect (200 on retry) · initial HTTP 301 · blocked: false
🇬🇷 GR · success · ✅ Reachable from GR IP after redirect (200 on retry) · initial HTTP 301 · blocked: false

What this test returns (6 fields per country):
{
  "countryCode": "US",
  "status":      "success",
  "blocked":     false,
  "statusCode":  200,
  "error":       "",
  "summary":     "✅ Accessible from US IP"
}
The 6 fields:
countryCode: ISO 3166-1 alpha-2 country the test ran from
status: high-level outcome (success / failed / error)
blocked: whether the site rejected the visitor (geo or anti-bot)
statusCode: HTTP status from the origin (e.g. 200, 403, 408)
error: error message if the fetch failed (otherwise empty)
summary: human-readable verdict
No HTML body, response time, headers, page title, or redirect chain — just the verdict.
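As a sketch of how such a probe can be reproduced, the function below maps a raw fetch outcome onto the six fields. The field names match the sample payload; the classification rules (e.g. treating 403/429 as active blocks and other 4xx/5xx as plain failures) are our assumptions, not RankBee's exact logic.

```python
def build_result(country_code: str, status_code: int, error: str = "") -> dict:
    """Map one geo-probe fetch outcome onto the 6-field payload shown above."""
    # Assumption: 403/429 mean the visitor was actively rejected
    # (geo-fence or anti-bot); timeouts and 5xx count as failures, not blocks.
    blocked = status_code in (403, 429)
    if error:
        status = "error"
    elif blocked or status_code >= 400:
        status = "failed"
    else:
        status = "success"
    verdict = "✅ Accessible" if status == "success" else "❌ Not accessible"
    return {
        "countryCode": country_code,
        "status": status,
        "blocked": blocked,
        "statusCode": status_code,
        "error": error,
        "summary": f"{verdict} from {country_code} IP",
    }
```

For example, `build_result("US", 200)` reproduces the sample JSON payload above, while `build_result("GR", 403)` yields a failed, blocked verdict.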
PHASE 3

LLM web-search access

3 reachable · 1 not reachable

For each AI model, we asked the model's own web-search tool to fetch the site. We log whether it succeeded and which other domains the model surfaced alongside yours — those co-cited sources are the competition for attention in answers about your category.

OpenAI · gpt-5.4 · Reachable · co-cited sources: none (fetched directly)
Used web-search and described the homepage correctly — the "Rebuilding Tomorrow" tagline and the personal-insurance solution links matched the live page. But the response contained zero citations to interamerican.gr; ChatGPT consumed your content but did not surface the URL to the buyer.

Anthropic · claude-sonnet-4-6 · Reachable · co-cited sources: none (fetched directly)
Same pattern as OpenAI — fetched and accurately summarised the homepage ("major Greek insurance group offering health, car, home, and business insurance") but cited nothing. Claude reads you but doesn't link you.

Google · gemini-3.1-flash-lite-preview · Not reachable · co-cited sources: interamerican.gr
Hallucinated. Claimed to fetch the page, returned a heading ("Βοηθάμε τους ανθρώπους να ζήσουν ασφαλέστερα, περισσότερο και καλύτερα" — "We help people live more safely, longer and better") that does NOT match your live homepage. Gemini's URL-retrieval tag flagged success, but the model fabricated content from prior knowledge of similarly named insurance brands. Risk: Gemini answers about InterAmerican may include made-up product names or claims.

Perplexity · sonar · Reachable · co-cited sources: teainteramerican.gr, portal.teainteramerican.gr, megabrokers.gr, interamericanawards.gr
Reached the page and cited interamerican.gr, but co-cited the TEA Interamerican pension fund, Mega Brokers PDFs, and the Interamerican Awards subdomain as if they were independent sources. In one run, Perplexity surfaced ΤΕΑ Interamerican (the employee pension fund) AS the primary heading instead of the consumer brand — a classic disambiguation collision.
PHASE 4

Bot impersonation test

7 critical bots inaccessible

We sent requests using each bot's exact User-Agent string. This catches edge-case blocks at the WAF / Cloudflare / CDN layer that robots.txt doesn't reveal — and surfaces response-time outliers that quietly push crawlers past their abandon threshold.

Bot | Status | HTTP | Response time
oai-searchbot | blocked | 301 | 6,800 ms
chatgpt-user | blocked | 301 | 13,400 ms
gptbot | blocked | 301 | 7,900 ms
chatgpt-agent | blocked | 301 | 7,500 ms
perplexitybot | blocked | 301 | 6,200 ms
perplexity-user | blocked | 301 | 7,000 ms
googlebot | accessible | 200 | 23,000 ms ⚠️
googlebot-smartphone | blocked | 408 | 26,900 ms
bingbot | accessible | 200 | 18,500 ms ⚠️
bing-copilot | blocked | 301 | 7,500 ms
claudebot | blocked | 301 | 10,400 ms
claude-user | blocked | 301 | 7,200 ms
claude-searchbot | blocked | 301 | 7,800 ms
grok | blocked | 301 | 7,800 ms
deepseek | blocked | 301 | 19,600 ms
Two patterns to fix: (1) googlebot-smartphone times out (HTTP 408 at 26.9s) where desktop googlebot gets a 200, likely a CDN rule treating the mobile crawl as suspicious. (2) Even the bots that do get a 200 are slow: googlebot (23.0s) and bingbot (18.5s) run 5–10× the typical ~2s baseline. Most LLM crawlers abandon at 3–5s, so these bots are likely truncating or skipping your pages even when the HTTP says 200.
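The labelling logic behind the table can be sketched as a small classifier. The User-Agent fragments and the 5-second patience value are illustrative assumptions (real crawler UA strings are longer and versioned, and actual abandon thresholds vary by engine):

```python
# Illustrative User-Agent fragments for impersonated fetches; the real
# strings published by each vendor are longer and include version info.
BOT_AGENTS = {
    "gptbot": "Mozilla/5.0 (compatible; GPTBot/1.0)",
    "claudebot": "Mozilla/5.0 (compatible; ClaudeBot/1.0)",
    "perplexitybot": "Mozilla/5.0 (compatible; PerplexityBot/1.0)",
}

ABANDON_MS = 5_000  # assumed LLM-crawler patience (3-5 s per the analysis)

def classify(status_code: int, elapsed_ms: int) -> str:
    """Label one impersonated fetch the way the table above does."""
    if status_code != 200:
        return "blocked"          # 301/403/408: the bot never saw the page
    if elapsed_ms > ABANDON_MS:
        return "accessible-slow"  # 200, but past typical crawler patience
    return "accessible"
```

With the table's own numbers, `classify(301, 6800)` yields "blocked" and `classify(200, 23000)` yields "accessible-slow", matching the googlebot ⚠️ flag. Note the requests must be sent without following redirects, otherwise a 301 served only to bot user-agents is invisible.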
PHASE 5

Indexability · token depth

Majority of pages healthy

Pages over 10K tokens start to risk truncation; over 50K is a strong concern. Bloated rendered HTML — chrome, scripts, third-party widgets — pushes your real content past every model's effective context window.
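The token-depth check itself is easy to approximate. Below is a minimal sketch that strips scripts and styles, then applies the common ~4-characters-per-token heuristic and the report's 10K/50K thresholds; the real audit presumably uses an actual model tokenizer, so treat these numbers as estimates.

```python
from html.parser import HTMLParser

class _TextExtractor(HTMLParser):
    """Collect visible text, skipping <script> and <style> payloads."""
    def __init__(self):
        super().__init__()
        self.chunks, self._skip = [], 0
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1
    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1
    def handle_data(self, data):
        if not self._skip:
            self.chunks.append(data)

def estimate_tokens(html_doc: str) -> int:
    """Rough token count: visible characters / 4 (a common heuristic)."""
    p = _TextExtractor()
    p.feed(html_doc)
    text = " ".join("".join(p.chunks).split())
    return len(text) // 4

def depth_status(tokens: int) -> str:
    # Thresholds from the report: >10K is at risk, >50K a strong concern.
    if tokens > 50_000:
        return "Strong concern"
    return "At risk" if tokens > 10_000 else "Healthy"
```

Applied to the report's figures, `depth_status(2_500)` is "Healthy" while `depth_status(14_200)` lands the Bewell page in "At risk".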

Page | Tokens | Status
Homepage (/) | 2.5K | Healthy
Bewell health insurance (/idiotes/proionta-ypiresies/ygeia/asfaleia-ygeias) | 14.2K | At risk
Group / About (/idiotes/omilos-interamerican) | 11.8K | At risk
Network / Partners (Δίκτυο) (/idiotes/anazhthsh-diktyo-interamerican) | 1.8K | Healthy
Press Office (/idiotes/omilos-interamerican/grafeio-typoy) | 6.4K | Healthy
Why these pages are heavy: 2 explanations
Bewell health insurance · /idiotes/proionta-ypiresies/ygeia/asfaleia-ygeias
In the risk zone (>10K). The hero animation, age/zip-code calculator widget, and an 'Asphaleia gia esas' ('Insurance for you') navigation panel push the substantive coverage matrix below the AI-readable budget. Your own /campaigns/google-asfaleia-ygeias-bewell/ landing variant scores 1.42 (rank 5) while THIS page scores 1.35 (rank 6) — meaning a paid-search landing page out-cites your canonical product page.
Group / About · /idiotes/omilos-interamerican
In the risk zone. The 57-year heritage statement, Achmea ownership, and Solvency II disclosures sit below a leadership-photo grid and a brand-history carousel. Result: el.wikipedia.org (2.84) and csrhellas.org (2.09) both out-cite this page on "who is Interamerican" prompts. The org page never reaches AI bots in plain HTML.
04 · Sentiment Snapshot

Brand perception: how AI models describe you to buyers

Buyer prompts grouped by intent cluster. Sentiment is read from how each of the 5 engines actually framed the brand in its 27 April live answer. Recommend = explicit endorsement; Mention = named without endorsement; Absent = not present in any of the 5 engines for that prompt....

Infrastructure & setup
1 prompt · 4 model responses analysed
Absent

This is the genuine weakness. InterAmerican is absent from the single prompt in this cluster (1/1 absent, 0 positive). That prompt — 'insurance APIs that integrate with Salesforce, SAP and Microsoft Dynamics' — yields engines naming generic insurance-tech vendors and citing developer.salesforce.com, sap.com, f6s.com, epsilonnet.gr. No Greek insurer dominates here — but ERGO Hellas was at least mentioned (1 neutral). Action: publish a single 'API & integrations for enterprise insurance' page describing your Salesforce, SAP, Dynamics connectors. Even a barebones flat-HTML reference would lift coverage significantly.

Vendor evaluation
4 prompts · 16 model responses analysed
Positive

InterAmerican is recommended in 4 of 4 vendor-evaluation prompts across all 5 engines — the strongest cluster in this audit. Engines recommending: CHATGPT 3/4, GEMINI 2/4, PERPLEXITY 3/4, CLAUDE 3/4, AIO 2/4. Closest competitors: Ethniki Asfalistiki (3 of 4 prompts), NN Hellas (3 of 4 prompts), Generali Hellas (3 of 4 prompts). The narrative engines build is consistent: 'one of Greece's largest insurers, owned by Achmea, leading in motor and health'. BUT the citation footnotes do not include interamerican.gr in this cluster — engines lean on pacificprime.com, piraeusbank.gr, generali.gr, nbg.gr, insuranceline.gr, plus banking pages (Piraeus, NBG) and Pacific Prime / Bupa expat broker content. You are winning the brand recall war but losing the citation share war. Action: publish a vendor-comparison page in flat HTML that aggregator sites and engines can quote directly.

Operational & risk
3 prompts · 12 model responses analysed
Positive

InterAmerican appears in all 3 operational/risk prompts as recommended (3/3). Engines recommending: CHATGPT 1/3, GEMINI 2/3, PERPLEXITY 1/3, CLAUDE 1/3, AIO 3/3. Closest peers: Ethniki Asfalistiki (2 of 3 prompts), Generali Hellas (2 of 3 prompts), Eurolife FFH (2 of 3 prompts). This is a structural strength — the 'Anytime' roadside brand and the AI claims-handling story are quoted unprompted by ChatGPT, Gemini, and Claude. Citation footnotes lean heavily on atc.gr, hellasdirect.gr, covariance.gr, friss.com, generali.gr. Generali Hellas wins citations (10 cells citing generali.gr) because their public-facing risk content sits in plain HTML; InterAmerican has the same strength in narrative but loses the citation footnote because its risk-management pages are gated or JS-rendered.

Compliance & tax
2 prompts · 8 model responses analysed
Positive

InterAmerican is recommended in both compliance/tax prompts (2/2) — a stronger position than competitive intel would have predicted. Engines recommending: CHATGPT 1/2, GEMINI 1/2, CLAUDE 1/2. Peers also recommended: Ethniki Asfalistiki (2 of 2 prompts), NN Hellas (2 of 2 prompts), Generali Hellas (2 of 2 prompts). Engines route to lloyds.com, iclg.com, bankofgreece.gr, mordorintelligence.com, rokas.com (Lloyd's, ICLG, Bank of Greece) for the actual compliance facts — InterAmerican is named alongside Ethniki, NN, Eurolife, Generali as a Solvency II–compliant Greek insurer. The opportunity: convert this 'named but not cited' position into citations by publishing a Solvency II ratio + SCR coverage page with structured data; engines would cite it directly.
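To make the structured-data recommendation concrete, here is a hedged sketch of the kind of schema.org JSON-LD such a page could embed. Every field value below is a placeholder for illustration, not an actual InterAmerican disclosure, and the exact property choices are our assumptions:

```json
{
  "@context": "https://schema.org",
  "@type": "InsuranceAgency",
  "name": "InterAmerican",
  "url": "https://www.interamerican.gr/",
  "parentOrganization": { "@type": "Organization", "name": "Achmea" },
  "subjectOf": {
    "@type": "Dataset",
    "name": "Solvency II SCR coverage ratio",
    "description": "Placeholder: publish the actual SCR coverage ratio, MCR ratio, and reporting date here.",
    "temporalCoverage": "2025"
  }
}
```

Serving this inline on a plain-HTML compliance page gives engines a quotable, machine-readable fact to cite instead of routing to Lloyd's or ICLG.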

Sentiment leaderboard

Share of voice across 10 prompts × 4 models (Pos · Neu · Abs)
1. InterAmerican (you): 9 · 0 · 1
2. Ethniki Asfalistiki: 7 · 2 · 1
3. Generali Hellas: 7 · 1 · 2
4. Eurolife FFH: 6 · 0 · 4
5. NN Hellas: 5 · 1 · 4
6. Allianz Hellas: 4 · 2 · 4
7. ERGO Hellas: 2 · 4 · 4
8. Groupama Greece: 2 · 1 · 7
9. European Reliance: 1 · 1 · 8
10. Atlantiki Enosi: 0 · 0 · 10

Frequently asked

What is a GAIO Deficit Report?

GAIO stands for Generative AI Optimization — getting your brand cited inside AI answers, not just ranked on a results page. The Deficit Report is RankBee's diagnostic: across leading AI engines (ChatGPT, Gemini, Perplexity, Claude, and Google AI Overviews) and a tailored prompt set, it shows which answers your brand is missing from, which competitors take the citation in your place, and the technical and content reasons why.

Who is this for?

Anyone whose audience now turns to ChatGPT, Gemini, Perplexity or Claude before making a decision. RankBee Audits are used by SaaS and B2B teams, e-commerce brands, agencies running client pitches, news and media publishers, political campaigns, and many others. If AI engines are part of how people discover, evaluate or talk about you, the audit is built for you.

How is this different from a traditional SEO audit?

A traditional audit grades you on Google's signals — backlinks, keywords, Core Web Vitals. RankBee grades you on what large language models actually reason about: entities, attributes, answer-first structure, citation-worthiness, and crawlability through the bot stack AI assistants use today (GPTBot, ClaudeBot, PerplexityBot, Google-Extended and 20 more). Strong Google rankings don't automatically translate into AI citations, and that gap is what the audit measures.

How does the audit work?

Four sections, each grounded in real data. Crawlability runs five technical phases: robots.txt rules, virtual-user probes from your target geographies, live LLM web-search fetches, bot-impersonation against your CDN, and token-depth indexability. Rankings Matrix runs your buyer prompts against up to 5 AI engines and logs every citation, co-citation, and competitor mention. Content Scorecard simulates AI ranking at the page level — RankBee ingests competitor content, generates variations, and scores yours 1–10 on the attributes models actually reward. Sentiment Snapshot reads how engines describe you when they do mention you, clustered by audience intent.

Where do the prompts come from?

RankBee discovers them for you. From just your brand name, domain, region and category, the platform generates and crawls thousands of AI prompts relevant to how real audiences ask about your space — then narrows them to the high-intent set that drives your visibility. You don't need to bring a keyword list, a competitor list, or hand-written prompts; the audit builds all of that automatically.

What does "invisible to AI" actually mean?

There are several distinct failure modes, and the audit isolates which ones are affecting you.

  • Uncrawlable. Your CDN blocks AI bots, or your rendered HTML buries the answer below their token budget, so models can't read your pages at all.
  • Crawlable but uncited. Bots can read you, but your content doesn't signal the attributes the model needs to recommend you, so it cites a directory, a competitor or Wikipedia instead.
  • Cited but mis-framed. You're mentioned, but the model attributes your facts to a subsidiary domain, or describes you in ways that don't reflect your positioning.
  • Locked out of live retrieval. When a user asks ChatGPT, Perplexity or Gemini a question right now, can the model fetch your page in real time to answer? The crawlability audit tests this end-to-end — many sites pass robots.txt but fail at the CDN or render layer, so live retrieval silently fails.
  • Excluded from training data. Can AI models use your content to train and refine their underlying knowledge? Your robots.txt and bot policies decide whether crawlers like GPTBot, ClaudeBot, Google-Extended and CCBot are allowed to ingest you. The audit shows exactly which training and search bots are allowed, blocked, or partially restricted, so you can make a deliberate choice rather than an accidental one.
How long does it take, and what do I need to provide?

Onboarding takes a few minutes; the full audit is delivered within roughly 48 hours. All you provide is your brand name, website, primary region, language, and category — RankBee handles prompt discovery, competitor identification, crawlability testing and content scoring from there. Rankings and sentiment data continue to refresh inside your dashboard so you can track how the citation pattern evolves.

What happens after the report — does it fix the issues?

The audit diagnoses; remediation happens in the rest of the platform. Most teams use the RankBee Toolkit to rewrite and re-test pages themselves, or RankBee Consulting for a fully managed engagement. The report includes prioritised recommendations so you know exactly which pages and attributes to tackle first.

Can I share the report with my team and stakeholders?

Yes — audit reports are shareable by link, so it's easy to align marketing, content, technical SEO and leadership around the same data, and to brief agencies or executives without recreating the analysis. Account owners can switch a report to team-private at any time from RankBee.

How do I get a full audit?
Full audits are available to RankBee subscribers. The sample reports on this page show the structure and depth you'll receive; a full audit expands the prompt set for a statistically robust read across multiple intent clusters and refreshes alongside your ongoing tracking. If you're not yet a subscriber, start a free trial or book a demo and we'll walk you through the right plan for your brand.
Want this for your brand?

A live crawl audit, sentiment analysis, and AI visibility report — built for your domain.

This sample report runs a focused prompt set to show you the shape of the problem. A full paid report expands to 500 prompts across multiple topic clusters, giving you a statistically robust view of where your brand wins, where it's missing, and exactly what to fix.

Prepared by RankBee · rankbee.ai · RB-2026-04-IA-0427