Public sample

AI Visibility and Tech Audit for
Isle of Man

How iGaming, sportsbook, and crypto-casino founders evaluating where to get licensed find the Isle of Man across ChatGPT, Google AI Overviews, Perplexity, Gemini, and Claude. Audited across 10 buyer prompts, 5 AI engines, and 5 site pages, covering your crawlability, content optimization, sentiment, and rankings.

Default visibility: public. Anyone with the link can read this report. Sign in to your RankBee account to make it private to your team.
Isle of Man
digitalisleofman.com
Generated: April 27, 2026
Audit window: Last 14 days
Report ID: RB-2026-04-IOM-0427
What's in this report

Four sections covering technical access, AI visibility, content, and reputation.

This is more than a crawl audit. We measure where your buyers go to find you, what AI says when they ask, and what's missing from your story.

01Content Scorecard

Content quality: How well does your content perform vs the competition?

We run a simulation: RankBee crawls each of your pages and scores it 1–10 against your competitors. We don't just guess whether what you've written could score high. We simulate it and we KNOW which content is best placed to rank higher, as well as what changes you need to make to...

Page-by-page scoring
As % · 5 pages graded
14% your avg · 27% leader avg
Page
Your score
Leader
Δ
iGaming Homepage
/igaming/
11%
25%
https://www.smartico.ai/blog-post/best-countries-jurisdictions-online-gambling-license-acquisition-process
14%
About Digital Isle of Man
/about-us/
17%
21%
https://im.linkedin.com/company/digital-isle-of-man
4%
iGaming Licences
/igaming/igaming-licences/
10%
20%
https://gofaizen-sherle.com/gambling-license/isle-of-man
10%
Why Regulatory Credibility (licensee editorial)
/news/why-regulatory-credibility-gives-the-isle-of-man-its-competitive-edge-in-2026/
24%
27%
https://legarithm.io/blog/anjouan-vs-malta-vs-isle-of-man/
3%
Newsroom Index
/news/
10%
16%
https://focusgn.com/isle-of-man-regulator-lifts-gambling-money-laundering-risk-level
6%

Content quality leaderboard

Weighted average across audited pages
Brand
GAIO Score
Avg Rank
1.
legarithm.io
27%
1.00
2.
igamingbusiness.com
20%
2.00
3.
smartico.ai
19%
4.00
4.
gofaizen-sherle.com
19%
2.00
5.
applebyglobal.com
18%
2.50
6.
gaminglicenserequirements.net
18%
2.00
7.
gbo-licensing.com
17%
2.50
8.
im.linkedin.com
15%
1.00
9.
altenar.com
14%
3.00
10.
digitalisleofman.com
14%
9.60
02AI Rankings Matrix

Visibility coverage: where you appear vs. competitors

Real buyer prompts, run against 5 AI engines — ChatGPT, Gemini, Perplexity, Claude, and Google AI Overviews. Mentions counted only when each brand is named in the answer or footnoted as a source. Data captured 2026-04-27 via Bright Data LLM dataset APIs (ChatGPT, AIO,...
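The counting rule is simple enough to sketch. A minimal illustration only, not RankBee's production matcher; the answer text and alias lists below are made up:

```python
import re

def count_mentions(answer: str, brands: dict) -> dict:
    """Return, per brand, whether any alias appears in the engine's answer
    (body text or cited-source list). Case-insensitive whole-phrase match."""
    hits = {}
    for brand, aliases in brands.items():
        hits[brand] = any(
            re.search(rf"\b{re.escape(a)}\b", answer, re.IGNORECASE)
            for a in aliases
        )
    return hits

# Hypothetical engine answer and alias lists for demonstration.
answer = ("For a 2026 licence, the Isle of Man and Malta stand out; "
          "sources: gov.im, mga.org.mt")
brands = {
    "Isle of Man": ["Isle of Man", "gov.im"],
    "Malta (MGA)": ["Malta", "MGA", "mga.org.mt"],
    "Alderney (AGCC)": ["Alderney", "AGCC"],
}
print(count_mentions(answer, brands))
```

A brand counts as covered in a cell if either the name or a cited owned domain matches, which is why domain aliases sit alongside the display name.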

ChatGPT · GPT-5
90% you vs 60% Malta (MGA) · +30 pp gap
Gemini · 3 Flash
100% you vs 70% Malta (MGA) · +30 pp gap
Perplexity · Sonar
100% you vs 50% Malta (MGA) · +50 pp gap
Claude · Sonnet 4.5
80% you vs 60% Malta (MGA) · +20 pp gap
Google AIO · AI Overviews
100% you vs 50% Malta (MGA) · +50 pp gap
AI coverage matrix
All 10 prompts shown
You · Malta (MGA) · Gibraltar · Curaçao Gaming Control · Alderney (AGCC)
#
Prompt
ChatGPT
Gemini
Perplexity
Claude
Google AIO
1
Vendor Evaluation
Best jurisdictions for licensing an online gambling or sports betting operator in 2026
2
Vendor Evaluation
Compare Isle of Man, Malta, Gibraltar and Alderney for an online casino licence
3
Vendor Evaluation
Which jurisdiction is best for a B2B sportsbook platform provider serving multiple regulated markets
4
Operational and Risk
Operational risks of running a remote gambling business from the Isle of Man
5
Operational and Risk
Banking and payment processing options for licensed gambling operators in offshore jurisdictions
6
Operational and Risk
How does the Isle of Man Gambling Supervision Commission handle AML and player protection
7
Compliance and Tax
Tax rates and corporate structure for an iGaming company licensed in the Isle of Man
8
Compliance and Tax
Which gambling jurisdictions allow cryptocurrency wagering and what are the compliance requirements
9
Infrastructure and Setup
Steps to obtain a full Isle of Man online gambling licence and timeline
10
Infrastructure and Setup
Best data centres and hosting for online gambling operators in the Isle of Man

AI Coverage Leaderboard

Across all 10 prompt × 5 model cells
Brand
GAIO Score
Avg Rank
1.
Isle of Man
94%
1.30
2.
Malta (MGA)
58%
2.10
3.
Gibraltar
46%
2.80
4.
Curaçao Gaming Control
40%
3.20
5.
Alderney (AGCC)
16%
4.10
03AI Crawlability Audit

Identify risk areas for your AI SEO Crawl strategy

Before your content can be cited, it has to be crawled and read. We tested five layers: 1) what your robots.txt declares, 2) what real users experience, 3) what AI search agents retrieve through WebSearch, 4) what bots get past your CDN, and 5) whether your text content can be...

PHASE 1

Robots.txt analysis

2 critical bots blocked

What your robots.txt declares to each AI crawler, and which bots are allowed, blocked, or partially restricted.

Risk: High · Crawlers: 24 · Allowed: 22 · Blocked: 2 · Partial: 0
robots.txt · HTTP 200 · 15 lines · Fetched 2026-04-27 15:44 UTC
Legend: allowed · partial · blocked
Bot
Provider
Role
Status
Rule applied
GPTBot
OpenAI
Training crawler
Allow
No robots rule — defaults to fully crawlable
ChatGPT-User
OpenAI
User-triggered fetch
Allow
No robots rule — defaults to fully crawlable
OAI-SearchBot
OpenAI
Search indexing
Allow
No robots rule — defaults to fully crawlable
ClaudeBot
Anthropic
Training crawler
Block
WAF returned 408 timeout in bot impersonation
Claude-User
Anthropic
User-triggered fetch
Allow
Reaches site but live LLM web-search still WAF-rejected
Claude-SearchBot
Anthropic
Search indexing
Block
WAF returned 408 timeout in bot impersonation
anthropic-ai
Anthropic (legacy)
Training crawler
Allow
No robots rule
Google-Extended
Google
Training opt-out flag
Allow
No robots rule
GoogleOther
Google
General fetcher
Allow
No robots rule
PerplexityBot
Perplexity
Search indexing
Allow
Accessible — Perplexity is the only LLM that consistently fetches the site live
Perplexity-User
Perplexity
User-triggered fetch
Allow
Accessible
CCBot
Common Crawl
Training crawler
Allow
No robots rule
Bytespider
ByteDance
Training crawler
Allow
No robots rule
Meta-ExternalAgent
Meta
Search indexing
Allow
No robots rule
Meta-ExternalFetcher
Meta
Search indexing
Allow
No robots rule
Applebot-Extended
Apple
Search indexing
Allow
No robots rule
Amazonbot
Amazon
Search indexing
Allow
No robots rule
DuckAssistBot
DuckDuckGo
Search indexing
Allow
No robots rule
Diffbot
Diffbot
Search indexing
Allow
No robots rule
Omgilibot
Webz.io
Search indexing
Allow
No robots rule
FriendlyCrawler
FriendlyCrawler
Search indexing
Allow
No robots rule
ImagesiftBot
Imagesift
Search indexing
Allow
No robots rule
Cohere-ai
Cohere
Training crawler
Allow
No robots rule
Timpibot
Timpi
Search indexing
Allow
No robots rule
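If any of these allow/block outcomes is accidental rather than deliberate, the fix is a plain robots.txt stanza, and you can verify what each bot is told with Python's standard-library parser. The rules below are illustrative only, not a recommendation for this site:

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt: opt out of one training crawler,
# explicitly welcome the matching search/citation bot.
ROBOTS = """\
User-agent: GPTBot
Disallow: /

User-agent: OAI-SearchBot
Allow: /

User-agent: *
Allow: /
"""

rp = RobotFileParser()
rp.parse(ROBOTS.splitlines())

URL = "https://www.digitalisleofman.com/igaming/"
for bot in ("GPTBot", "OAI-SearchBot", "ClaudeBot"):
    # Bots with no named group fall through to the '*' default.
    print(bot, rp.can_fetch(bot, URL))
```

Parsing your own file this way is a quick regression check before and after a robots.txt change, and it mirrors how most well-behaved crawlers read the declarations.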
PHASE 2

Virtual user crawl test

1 probe returned non-200

A headless visit from a 🇺🇸 US IP confirms the site is reachable for real readers — and therefore reachable for AI crawlers that proxy through the same regions. This is a sanity check, not a deep audit.

🇺🇸 US · success
Accessible from US IP — apex 301 redirect resolves to www. host with HTTP 200.
HTTP 301 · blocked: false
What this test returns6 fields per country
{
  "countryCode": "US",
  "status":      "success",
  "blocked":     false,
  "statusCode":  200,
  "error":       "",
  "summary":     "✅ Accessible from US IP"
}
The 6 fields
countryCodeISO 3166-1 alpha-2 country the test ran from
statusHigh-level outcome: success / failed / error
blockedWhether the site rejected the visitor (geo or anti-bot)
statusCodeHTTP status from the origin (e.g. 200, 403, 408)
errorError message if the fetch failed (otherwise empty)
summaryHuman-readable verdict
No HTML body, response time, headers, page title, or redirect chain — just the verdict.
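The probe logic behind that verdict can be approximated in a few lines. This is a sketch only: field names follow the JSON above, but the block/failure thresholds are our assumptions, not the exact production rules:

```python
def probe_verdict(country, status_code, error=""):
    """Collapse a single headless fetch into the 6-field verdict shown above.
    Assumption: 401/403/408/429 indicate a geo or anti-bot rejection."""
    blocked = status_code in (401, 403, 408, 429)
    ok = status_code is not None and 200 <= status_code < 400 and not blocked
    return {
        "countryCode": country,
        "status": "success" if ok else ("error" if error else "failed"),
        "blocked": blocked,
        "statusCode": status_code or 0,
        "error": error,
        "summary": (f"✅ Accessible from {country} IP" if ok
                    else f"❌ Not accessible from {country} IP"),
    }

print(probe_verdict("US", 200))   # the healthy case in this report
print(probe_verdict("US", 408))   # what a WAF timeout would look like
```

Redirect statuses (301/302) count as success here because the report treats a resolving redirect chain as reachable; a stricter probe could follow the chain and report the final hop separately.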
PHASE 3

LLM web-search access

1 reachable · 3 not reachable

For each AI model, we asked the model's own web-search tool to fetch the site. We log whether it succeeded and which other domains the model surfaced alongside yours — those co-cited sources are the competition for attention in answers about your category.

Provider
Model
Status
Co-cited sources
Notes
OpenAI
gpt-5.4
Not reachable
none — fetched directly
WAF rejected the live page request on the second crawl pass; OpenAI returned an empty heading and summary. (Note: an earlier crawl during this audit window did succeed for OpenAI — the WAF behaviour appears bursty and timing-dependent, not deterministic.) Despite this, in our 5-engine prompt run via Bright Data resi-IP, ChatGPT returned full answers and mentioned the Isle of Man in 9 of 10 prompts — confirming the brand is well-represented in training data even when live fetch is intermittent.
Anthropic
claude-sonnet-4-6
Not reachable
none — fetched directly
WAF returned 'Request Rejected' (support ID logged) on both audit attempts. Claude users asking about Isle of Man iGaming will receive answers built from the model's prior training plus whatever cached SERP snippets it can find — your live page never enters the loop. In our direct Anthropic API run with web_search enabled (claude-sonnet-4-5), Claude still mentioned IoM 129 times across 10 prompts (the highest of any engine), but cited third-party domains — slotegrator.pro, gofaizen-sherle.com, applebyglobal.com — rather than digitalisleofman.com, because the WAF blocks Claude's fetch.
Google
gemini-3.1-flash-lite-preview
Not reachable
digitalisleofman.com (URL flagged URL_RETRIEVAL_STATUS_SUCCESS but body was the WAF rejection page)
Gemini's URL retrieval reports SUCCESS metadata but the actual content is a WAF challenge page. The model has no real content to ground its answers in. In our Bright Data Gemini run, the engine still mentioned IoM 90 times across 10 prompts and recommended IoM in 3 prompts — strong representation, but the lack of direct site access means competitive narrative is shaped by third-party comparison sites, not your own pages. Google AI Overviews behaved similarly with 60 mentions.
Perplexity
sonar
Reachable
https://www.digitalisleofman.com/about-us/
https://www.digitalisleofman.com/digital-isle/
https://www.digitalisleofman.com/events/digital-isle-2025/
https://www.youtube.com/@digitalisleofman1912
https://www.gov.im/news/2025/oct/16/digital-isle-of-man-to-lead-early-work-to-establish-national-ai-office/
https://www.eventbrite.com/o/digital-isle-of-man-17706237085
https://open.spotify.com/show/4uvj2kfwl9Q6RGEiYe8nQ6
https://www.iomdfenterprise.im/media/fxxl4uvf/diom-2026-programme-draft-web.pdf
Perplexity is the only major LLM reliably citing digitalisleofman.com directly. Heading and summary match ground truth. Critically, Perplexity's co-cited set leans on /about-us/ rather than /igaming/ — your iGaming proposition is being framed by your corporate page, not your industry page.
PHASE 4

Bot impersonation test

2 critical bots inaccessible

We sent requests using each bot's exact User-Agent string. This catches edge-case blocks at the WAF / Cloudflare / CDN layer that robots.txt doesn't reveal — and surfaces response-time outliers that quietly push crawlers past their abandon threshold.

Bot
Status
HTTP
Response time
oai-searchbot
accessible
200
17,200ms⚠️
chatgpt-user
accessible
200
13,300ms⚠️
gptbot
accessible
200
16,900ms⚠️
chatgpt-agent
blocked
301
6,000ms
perplexitybot
accessible
200
14,500ms⚠️
perplexity-user
accessible
200
15,300ms⚠️
googlebot
accessible
200
33,700ms
googlebot-smartphone
accessible
200
13,300ms⚠️
bingbot
accessible
200
14,900ms⚠️
bing-copilot
accessible
200
13,000ms⚠️
claudebot
blocked
408
22,500ms
claude-user
accessible
200
16,600ms⚠️
claude-searchbot
blocked
408
21,100ms
grok
accessible
200
16,400ms⚠️
deepseek
accessible
200
12,200ms⚠️
Two patterns to fix: (1) claudebot and claude-searchbot are blocked with 408 timeouts while claude-user gets a 200, and chatgpt-agent is stopped at a 301 — the WAF is singling out specific crawler user-agents rather than whole providers. (2) The bots that do respond take 12–34s against a typical ~2s baseline. Most LLM crawlers abandon at 3–5s, so those bots are likely truncating or skipping your pages even when the HTTP says 200.
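Reproducing this test is straightforward: send the same URL with each bot's published User-Agent string and time the round trip. A stdlib sketch — the UA strings here are placeholders (use each vendor's documented value), and the 5s abandon threshold is an assumption:

```python
import time
import urllib.error
import urllib.request

BOT_UAS = {
    # Placeholder UA strings — substitute each bot's published User-Agent.
    "gptbot": "Mozilla/5.0 (compatible; GPTBot/1.0)",
    "claudebot": "Mozilla/5.0 (compatible; ClaudeBot/1.0)",
}

SLOW_MS = 5000  # assumed abandon threshold for most LLM crawlers

def classify(status, elapsed_ms):
    """Fold one impersonated fetch into the table's verdicts."""
    if status != 200:
        return "blocked"
    return "slow" if elapsed_ms > SLOW_MS else "accessible"

def impersonate(url, ua, timeout=30):
    """Fetch url with a spoofed User-Agent; return (status, elapsed_ms)."""
    req = urllib.request.Request(url, headers={"User-Agent": ua})
    t0 = time.monotonic()
    try:
        status = urllib.request.urlopen(req, timeout=timeout).status
    except urllib.error.HTTPError as e:
        status = e.code  # 4xx/5xx still carries a status worth recording
    return status, (time.monotonic() - t0) * 1000

# Classifications matching rows from the table above:
print(classify(200, 1800), classify(408, 22500), classify(200, 17200))
```

Run `impersonate(url, ua)` per bot and feed the result to `classify` to rebuild the verdict column; anything landing in "slow" deserves the same attention as a hard block.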
PHASE 5

Indexability · token depth

Majority of pages healthy

Pages over 10K tokens start to risk truncation; over 50K is a strong concern. Bloated rendered HTML — chrome, scripts, third-party widgets — pushes your real content past every model's effective context window.

Page
10K · 50K · 100K
Tokens
Status
iGaming Homepage
/igaming/
15.4K
At risk
iGaming Licences
/igaming/igaming-licences/
7.5K
Healthy
About Digital Isle of Man
/about-us/
8.7K
Healthy
Why Regulatory Credibility (licensee editorial)
/news/why-regulatory-credibility-gives-the-isle-of-man-its-competitive-edge-in-2026/
8.9K
Healthy
Newsroom Index
/news/
7.9K
Healthy
Why these pages are heavy · 1 explanation
iGaming Homepage · /igaming/
~15.4k total tokens. <main> opens at ~3,340 tokens deep — a 3.3k-token wall of nav, hero markup, and inline CSS sits ahead of the actual jurisdictional pitch. LLMs that truncate at 4k–8k tokens will see your value props, but lose the FAQ block and licence-process detail at ~13.5k.
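You can get a rough read on token depth yourself by stripping script/style from the rendered HTML and applying the common ~4-characters-per-token rule of thumb. Real tokenizers differ per model, so treat this as an estimate, not a measurement:

```python
import re
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text, skipping <script> and <style> blocks."""
    def __init__(self):
        super().__init__()
        self.skip = 0
        self.chunks = []
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip += 1
    def handle_endtag(self, tag):
        if tag in ("script", "style") and self.skip:
            self.skip -= 1
    def handle_data(self, data):
        if not self.skip:
            self.chunks.append(data)

def rough_tokens(html):
    """~4 chars per token is a widely used approximation."""
    p = TextExtractor()
    p.feed(html)
    text = re.sub(r"\s+", " ", " ".join(p.chunks)).strip()
    return len(text) // 4

# Synthetic page: a script block plus ~9.6k chars of body copy.
html = ("<html><script>var x=1;</script><main>"
        + "licence process detail. " * 400 + "</main></html>")
print(rough_tokens(html))
```

Note this counts only visible text; the raw rendered HTML (nav markup, inline CSS) is what actually consumes a crawler's budget, so the gap between this estimate and the full-page token count is itself a bloat signal.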
04Sentiment Snapshot

Brand perception: how AI models describe you to buyers

Buyer prompts grouped by intent cluster. Sentiment is read directly from each engine's answer — 'recommend' if the engine names IoM as a top choice, 'mention' if named without endorsement, 'absent' if not mentioned. Real data from 50 engine answers captured 2026-04-27.
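The labelling rule reduces to a few lines of Python. The cue phrases below are hypothetical stand-ins — the real read is done on full answers, not keyword hits:

```python
# Hypothetical recommendation cues for illustration only.
RECOMMEND_CUES = ("top choice", "best option", "recommend", "stands out as")

def label_sentiment(answer, brand):
    """'recommend' / 'mention' / 'absent', per the rule described above."""
    low = answer.lower()
    if brand.lower() not in low:
        return "absent"
    # Crude heuristic: a recommendation cue in the same sentence as the brand.
    for sentence in low.split("."):
        if brand.lower() in sentence and any(c in sentence for c in RECOMMEND_CUES):
            return "recommend"
    return "mention"

print(label_sentiment(
    "The Isle of Man stands out as a top choice for AML rigour.", "Isle of Man"))
print(label_sentiment(
    "Malta and Gibraltar also license operators.", "Isle of Man"))
```

Each prompt × engine cell gets exactly one of the three labels; the cluster verdicts below aggregate those cells.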

Vendor Evaluation
3 prompts · 15 model responses analysed
Neutral

This is the WEAK cluster for IoM — the only one where Malta clearly leads. On 'best jurisdictions 2026' (P1) IoM was missed by ChatGPT and Claude (Malta, Curaçao, and Gibraltar were the dominant trio). On the explicit 'compare IoM/Malta/Gib/Alderney' prompt (P2) all 5 engines named all 4 jurisdictions — a tie. On 'best B2B sportsbook' (P3) IoM appeared in 5/5 engines, but so did Malta, Gibraltar, and Curaçao. Cluster cell coverage: IoM 12/15, Malta 15/15, Gib 13/15, Curaçao 10/15. Recommendation counts: Malta 4, IoM 4, Gibraltar 3, Alderney 3, Curaçao 2 — IoM ties Malta on recommendations but loses on sheer mention frequency. Owned content cited: isleofmangsc.com (8 of 15 cells), digitalisleofman.com (4 of 15). Third-party narrative-shapers: gofaizen-sherle.com, slotegrator.pro, applebyglobal.com.

Compliance and Tax
2 prompts · 10 model responses analysed
Neutral

Strong neutral cluster — IoM mentioned in 9 of 10 cells (Malta 5, Gibraltar 3, Curaçao 4) but no engine actively recommended IoM here. On 'IoM tax structure' (P7) all 5 engines named IoM but the cited sources were applebyglobal.com, iclg.com, the-emgroup.com, gov.im, and lewissilkin.com — third-party legal/consulting content owned the tax narrative. On 'crypto wagering compliance' (P8) IoM appeared in 5/5 engines alongside Malta and Curaçao, but the citations were globallawexperts.com, slotegrator.pro, and quadrant.global. Opportunity: ship a /tax-structure/ and /crypto-licence/ page to convert these neutral mentions into recommend-grade language and shift citations from third-party advisers back to your own domain.

Operational and Risk
3 prompts · 15 model responses analysed
Positive

IoM is DOMINANT in this cluster. Cell coverage: IoM 13/15, Malta 6/15, Curaçao 4/15, Gibraltar 4/15, Alderney 0/15. On the IoM-specific operational-risk prompt (P4) all 5 engines answered with IoM at the centre and competitors absent. On AML/player protection (P6) again all 5 engines named IoM and only one (ChatGPT) mentioned Malta in passing. The August 2024 FATF risk-rating uplift narrative we expected to dominate did NOT show up materially — engines instead cited isleofmangsc.com (37 total citations across the audit), the GSC's enforcement-strategy PDFs on consult.gov.im, and Appleby/EM Group legal commentary. Headline finding: AI engines describe IoM's GSC as a serious AML regulator, not a risk-rated one. The August 2024 uplift coverage is a Section 01 problem (newsroom-index page is invisible) not a sentiment problem.

Infrastructure and Setup
2 prompts · 10 model responses analysed
Positive

Second-strongest cluster for IoM. Cell coverage: IoM 10/10, Malta 3/10, Gib 2/10, Curaçao 2/10, Alderney 0/10. On 'IoM licence steps' (P9) and 'best data centres for IoM operators' (P10) IoM was named in all 10 cells; engines explicitly recommended IoM in 2 cells (Gemini and Perplexity for licence-steps content). The data-centre prompt drew citations to isleofmandatacentre.com (8×), continent8.com (14× across the full audit), and the digitalisleofman.com /digital-isle/ pages. Strong base, but again the recommendation language comes from third parties — there is room for a step-by-step /licence-application-process/ page that owns the procedural answer engines reach for.

Sentiment leaderboard

Share of voice across 10 prompts × 5 models
Pos · Neu · Abs
1.
Isle of Man (you)
7 · 3 · 0
2.
Malta (MGA)
3 · 5 · 2
3.
Gibraltar
2 · 5 · 3
4.
Curaçao Gaming Control
2 · 4 · 4
5.
Alderney (AGCC)
2 · 1 · 7

Frequently asked

What is a GAIO Deficit Report?

GAIO stands for Generative AI Optimization — getting your brand cited inside AI answers, not just ranked on a results page. The Deficit Report is RankBee's diagnostic: across leading AI engines (ChatGPT, Gemini, Perplexity, Claude, and Google AI Overviews) and a tailored prompt set, it shows which answers your brand is missing from, which competitors take the citation in your place, and the technical and content reasons why.

Who is this for?

Anyone whose audience now turns to ChatGPT, Gemini, Perplexity or Claude before making a decision. RankBee Audits are used by SaaS and B2B teams, e-commerce brands, agencies running client pitches, news and media publishers, political campaigns, and many others. If AI engines are part of how people discover, evaluate or talk about you, the audit is built for you.

How is this different from a traditional SEO audit?

A traditional audit grades you on Google's signals — backlinks, keywords, Core Web Vitals. RankBee grades you on what large language models actually reason about: entities, attributes, answer-first structure, citation-worthiness, and crawlability through the bot stack AI assistants use today (GPTBot, ClaudeBot, PerplexityBot, Google-Extended and 20 more). Strong Google rankings don't automatically translate into AI citations, and that gap is what the audit measures.

How does the audit work?

Four sections, each grounded in real data. Crawlability runs five technical phases: robots.txt rules, virtual-user probes from your target geographies, live LLM web-search fetches, bot-impersonation against your CDN, and token-depth indexability. Rankings Matrix runs your buyer prompts against up to 5 AI engines and logs every citation, co-citation, and competitor mention. Content Scorecard simulates AI ranking at the page level — RankBee ingests competitor content, generates variations, and scores yours 1–10 on the attributes models actually reward. Sentiment Snapshot reads how engines describe you when they do mention you, clustered by audience intent.

Where do the prompts come from?

RankBee discovers them for you. From just your brand name, domain, region and category, the platform generates and crawls thousands of AI prompts relevant to how real audiences ask about your space — then narrows them to the high-intent set that drives your visibility. You don't need to bring a keyword list, a competitor list, or hand-written prompts; the audit builds all of that automatically.

What does "invisible to AI" actually mean?

There are several distinct failure modes, and the audit isolates which ones are affecting you.

  • Uncrawlable. Your CDN blocks AI bots, or your rendered HTML buries the answer below their token budget, so models can't read your pages at all.
  • Crawlable but uncited. Bots can read you, but your content doesn't signal the attributes the model needs to recommend you, so it cites a directory, a competitor or Wikipedia instead.
  • Cited but mis-framed. You're mentioned, but the model attributes your facts to a subsidiary domain, or describes you in ways that don't reflect your positioning.
  • Locked out of live retrieval. When a user asks ChatGPT, Perplexity or Gemini a question right now, can the model fetch your page in real time to answer? The crawlability audit tests this end-to-end — many sites pass robots.txt but fail at the CDN or render layer, so live retrieval silently fails.
  • Excluded from training data. Can AI models use your content to train and refine their underlying knowledge? Your robots.txt and bot policies decide whether crawlers like GPTBot, ClaudeBot, Google-Extended and CCBot are allowed to ingest you. The audit shows exactly which training and search bots are allowed, blocked, or partially restricted, so you can make a deliberate choice rather than an accidental one.
How long does it take, and what do I need to provide?

Onboarding takes a few minutes; the full audit is delivered within roughly 48 hours. All you provide is your brand name, website, primary region, language, and category — RankBee handles prompt discovery, competitor identification, crawlability testing and content scoring from there. Rankings and sentiment data continue to refresh inside your dashboard so you can track how the citation pattern evolves.

What happens after the report — does it fix the issues?

The audit diagnoses; remediation happens in the rest of the platform. Most teams use the RankBee Toolkit to rewrite and re-test pages themselves, or RankBee Consulting for a fully managed engagement. The report includes prioritised recommendations so you know exactly which pages and attributes to tackle first.

Can I share the report with my team and stakeholders?

Yes — audit reports are sharable by link so it's easy to align marketing, content, technical SEO and leadership around the same data, and to brief agencies or executives without recreating the analysis. Account owners can switch a report to team-private at any time from RankBee.

How do I get a full audit?
Full audits are available to RankBee subscribers. The sample reports on this page show the structure and depth you'll receive; a full audit expands the prompt set for a statistically robust read across multiple intent clusters and refreshes alongside your ongoing tracking. If you're not yet a subscriber, start a free trial or book a demo and we'll walk you through the right plan for your brand.
Want this for your brand?

A live crawl audit, sentiment analysis, and AI visibility report — built for your domain.

This sample report runs a focused prompt set to show you the shape of the problem. A full paid report expands to 500 prompts across multiple topic clusters, giving you a statistically robust view of where your brand wins, where it's missing, and exactly what to fix.

Prepared by RankBee·rankbee.ai·RB-2026-04-IOM-0427