Public sample

AI Visibility and Tech Audit for
B2i Technologies

Generative AI Indexing Optimisation audit for the US IR-website market. 5 pages scored against 10+ live competitor URLs, 10 buyer prompts modelled across 5 AI engines (50 cells), 4 sentiment clusters, 25-bot crawl audit and bot-impersonation matrix.

Default visibility: public. Anyone with the link can read this report. Sign in to your RankBee account to make it private to your team.
B2i Technologies
b2itech.com
Generated: May 14, 2026
Audit window: Last 14 days
Report ID: GAIO-B2I-2026-05
What's in this report

Four sections covering technical access, AI visibility, content, and reputation.

This is more than a crawl audit. We measure where your buyers go to find you, what AI says when they ask, and what's missing from your story.

Content scorecard
SECTION 01
Where B2i pages win and lose against live competitor URLs

5 strategic B2i pages scored 1-10 against the live competitor leaderboard for each query. Strongest finishes on pricing (rank 2/11) and hosted IR websites (rank 2/11); weakest on the SEO/LLM optimisation page (rank 11/11) and the DataAnywhere widget page (rank 5/11).

16% of 100
Rank #11, −23 vs leader
Rankings matrix
SECTION 02
How often B2i shows up across ChatGPT, Gemini, Perplexity, Claude and AI Overviews

10 buyer prompts × 5 engines = 50 cells. B2i appears in 9 cells (18%) with 72 mentions — the highest mentions-per-cell ratio of any vendor. Concentrated in the head-to-head comparison (P2) and the widget/plugin prompt (P9); totally absent from 7 of 10 prompts.

4% of 45 prompt × model cells
Rank #4 · 4% cited
Crawlability
SECTION 03
Robots.txt is exemplary — but the WAF blocks the bots robots.txt allows

Every major AI bot is explicitly Allowed in robots.txt, yet the edge times out 8 of 15 tested bots (GPTBot, ClaudeBot, Claude-SearchBot, Bing Copilot, Grok, ChatGPT-User, ChatGPT-Agent, Googlebot-Smartphone). Query-time LLM fetches still succeed for OpenAI, Anthropic, Gemini and Perplexity.

5 / 5 pages reachable
1 urgent
Sentiment
SECTION 04
Recommended on 3 of 10 prompts, completely missing on 7

When buyers ask the head-to-head (P2) or the open 'best provider' (P1), B2i is recommended. When they ask about widgets (P9), Claude and AI Overviews recommend B2i. On the other 7 prompts — small-cap vendors, switching risk, SEC disclosure, uptime, Reg FD, WCAG, migration — B2i is absent from every cell.

2 of 4 clusters need attention
Bimodal
01 · Content Scorecard

Content scorecard

Each of B2i's 5 strategic pages scored 1-10 by RankBee against the live competitor leaderboard returned for the page's target queries. Scores compare directly to the URLs AI engines are actually retrieving when buyers ask these questions today.

Page-by-page scoring
As % · 5 pages graded
16% your avg · 39% leader avg
Page · Your score · Leader · Δ
Homepage (https://b2itech.com/): you 14% · leader 26% (https://www.notified.com/IR/ir-websites) · Δ 12 pts
Hosted IR Websites (https://b2itech.com/hosted-websites/): you 21% · leader 31% (https://www.newmediawire.com/investor-relations/ir-suite) · Δ 11 pts
DataAnywhere / IROffice (https://b2itech.com/dataanywhere-and-iroffice/): you 14% · leader 39% (https://eodhd.com/lp/stock-widget) · Δ 25 pts
Pricing (https://b2itech.com/pricing/): you 25% · leader 29% (https://sourceforge.net/software/investor-relations-website-builder/) · Δ 4 pts
SEO / LLM Optimization (https://b2itech.com/seo/): you 10% · leader 21% (https://backend.pubcoinsight.com/for-companies/) · Δ 11 pts

Content quality leaderboard

Weighted average across audited pages
Brand · GAIO Score · Avg Rank
1. EODHD · 39% · 8.40
2. NewMediaWire · 31% · 8.40
3. SourceForge · 29% · 8.00
4. Gartner · 23% · 8.60
5. PubcoInsight · 21% · 7.80
6. Real Chemistry · 21% · 8.00
7. Stakeholder Labs · 21% · 8.20
8. Trizcom · 20% · 8.40
9. Notified · 19% · 4.40
11. B2i Technologies · 16% · 4.60
02 · AI Rankings Matrix

AI engine rankings matrix

10 buyer prompts run live across 5 AI engines (ChatGPT, Gemini, Perplexity, Claude, Google AI Overviews) = 50 cells. Each dot = a brand mention in that engine's response. Values are real mention counts parsed from the engines' actual answers and citation lists.
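The coverage arithmetic behind these numbers is easy to reproduce. A minimal sketch, assuming a cell structure of our own invention (RankBee's actual data model is not public), with illustrative toy counts rather than the report's real data:

```python
# Each cell is one (prompt, engine) pair; its value maps brands to
# mention counts parsed from that engine's answer.
def coverage_stats(cells, brand):
    """Coverage %, hit-cell count, and total mentions for one brand."""
    hits = [mentions[brand] for mentions in cells.values() if brand in mentions]
    return {
        "cells": len(hits),
        "coverage_pct": round(100 * len(hits) / len(cells)),
        "mentions": sum(hits),
    }

# Toy matrix: 4 of the 50 real cells, illustrative counts only.
cells = {
    ("P2", "ChatGPT"):      {"B2i Technologies": 9, "Q4 Inc": 3},
    ("P2", "Claude"):       {"B2i Technologies": 8},
    ("P1", "AI Overviews"): {"Q4 Inc": 2, "Notified": 1},
    ("P9", "Gemini"):       {},
}
print(coverage_stats(cells, "B2i Technologies"))
# {'cells': 2, 'coverage_pct': 50, 'mentions': 17}
```

The same function yields the report's headline figures when run over the full 50-cell matrix (9 hit cells, 18%, 72 mentions).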

ChatGPT (GPT-5.4): you 0% · vs 67% Q4 Inc · −67 pp gap
Gemini (Gemini 3.1 Flash): you 0% · vs 78% Q4 Inc · −78 pp gap
Perplexity (Sonar): you 0% · vs 11% Q4 Inc · −11 pp gap
Claude (Sonnet 4.5): you 22% · vs 33% Q4 Inc · −11 pp gap
AI Overviews (Google): you 22% · vs 56% Q4 Inc · −34 pp gap
AI coverage matrix
All 10 prompts shown
Legend: You · Q4 Inc (leader) · Notified · Equisolve · Nasdaq IR Insight
Prompts (each run across ChatGPT, Gemini, Perplexity, Claude and AI Overviews):
1. Vendor evaluation · Best IR website provider for public companies in 2026
2. Vendor evaluation · Q4 Inc vs Notified vs B2i Technologies vs Issuer Direct — which IR website platform is best?
3. Vendor evaluation · Top IR website vendors for small-cap and mid-cap NYSE/Nasdaq companies
4. Operational risk · Operational risks of switching IR website providers mid-year
5. Operational risk · How public companies handle SEC disclosure timing (8-K, 10-Q, 10-K) on their IR site
6. Operational risk · IR website uptime/reliability during earnings — what standards apply
7. Compliance & disclosure · Reg FD and SEC compliance requirements for investor relations websites
8. Compliance & disclosure · ADA and WCAG 2.2 accessibility compliance for public-company IR websites
9. Infrastructure & onboarding · Best real-time stock data widgets and plugins for WordPress IR sites
10. Infrastructure & onboarding · How long does it take to launch a new IR website for a public company — migration steps

AI Coverage Leaderboard

Across 45 prompt × model cells (generic prompts only)
Brand · GAIO Score · Avg Rank
1. Q4 Inc · 16% · 1.89
2. Notified · 16% · 1.89
3. Equisolve · 16% · 2.67
4. Nasdaq IR Insight · 9% · 3.78
5. EQS Group · 7% · 4.11
6. B2i Technologies · 4% · 4.00
7. Broadridge · 2% · 4.44
8. VendorGroup · 2% · 4.56
9. Kaleidoscope · 2% · 4.56
10. Issuer Direct · 0%
03 · AI Crawlability Audit

Crawlability & bot access

Whether AI bots can actually reach b2itech.com — measured across robots.txt declarations, virtual-user probes from the US, real bot impersonation across 15 user agents, query-time LLM web search, and content depth on the 5 scored pages.

PHASE 1

Robots.txt analysis

Permissive — all bots allowed

What your robots.txt declares to each AI crawler, and which bots are allowed, blocked, or partially restricted.

Risk: Low · Crawlers: 15 · Allowed: 15 · Blocked: 0 · Partial: 0
robots.txt: HTTP 200 · 42 lines · 2026-05-14 11:56 UTC
Legend: allowed · partial · blocked
Bot · Provider/role · Status · Rule applied
GPTBot · OpenAI training crawler · Allow · robots.txt allows; WAF times out at edge
ChatGPT-User · OpenAI on-demand fetcher · Allow · robots.txt allows; WAF times out (408)
OAI-SearchBot · OpenAI search index · Allow · Allowed and reachable
ClaudeBot · Anthropic training crawler · Allow · robots.txt allows; WAF times out (408)
Claude-User · Anthropic on-demand fetcher · Allow · Allowed and reachable
Claude-SearchBot · Anthropic search index · Allow · robots.txt allows; WAF times out (408)
anthropic-ai · Anthropic legacy UA · Allow · Allowed
Google-Extended · Google AI training · Allow · Allowed; Googlebot reachable, Smartphone times out
PerplexityBot · Perplexity crawler · Allow · Allowed and reachable (slow, ~36s)
Perplexity-User · Perplexity on-demand fetcher · Allow · Allowed and reachable (slow, ~34s)
CCBot · Common Crawl · Allow · Allowed
Applebot-Extended · Apple AI training · Allow · Allowed
Amazonbot · Amazon AI training · Allow · Allowed
Bytespider · ByteDance training · Allow · Allowed
DuckAssistBot · DuckDuckGo AI · Allow · Allowed
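A permissive verdict like this can be spot-checked with Python's standard-library robots.txt parser. A minimal sketch (the robots.txt body below is a stand-in, not B2i's actual 42-line file, which you would fetch from https://b2itech.com/robots.txt):

```python
import urllib.robotparser

# Stand-in robots.txt mirroring the "everything allowed" pattern above.
ROBOTS_TXT = """\
User-agent: GPTBot
Allow: /

User-agent: *
Allow: /
"""

AI_BOTS = ["GPTBot", "ChatGPT-User", "OAI-SearchBot", "ClaudeBot",
           "PerplexityBot", "Google-Extended", "CCBot"]

def declared_access(robots_txt, bots, url="https://b2itech.com/"):
    """What robots.txt declares per bot; says nothing about WAF behaviour."""
    rp = urllib.robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return {bot: ("Allow" if rp.can_fetch(bot, url) else "Disallow")
            for bot in bots}

print(declared_access(ROBOTS_TXT, AI_BOTS))
```

The report's key finding holds regardless of what this prints: robots.txt is advisory, so a green "Allow" here says nothing about what the WAF does to the same user agent (see Phase 4).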
PHASE 2

Virtual user crawl test

1 probe — 200 OK

Headless visit from a 🇺🇸 US IP confirms the site is reachable for real readers — and therefore reachable for AI crawlers that proxy through the same regions. This is a sanity check, not a deep audit.

🇺🇸 US · success
Accessible from US residential IP — HTTP 200, full page render, no challenge.
HTTP 200 · blocked: false
What this test returns: 6 fields per country
{
  "countryCode": "US",
  "status":      "success",
  "blocked":     false,
  "statusCode":  200,
  "error":       "",
  "summary":     "✅ Accessible from US IP"
}
The 6 fields:
countryCode: ISO 3166-1 alpha-2 country the test ran from
status: high-level outcome (success / failed / error)
blocked: whether the site rejected the visitor (geo or anti-bot)
statusCode: HTTP status from the origin (e.g. 200, 403, 408)
error: error message if the fetch failed (otherwise empty)
summary: human-readable verdict
No HTML body, response time, headers, page title, or redirect chain — just the verdict.
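If you consume these probe results programmatically, the six fields map onto a small record type. A sketch using the field names from the sample above (the class name and `reachable` helper are ours, not part of RankBee's output):

```python
import json
from dataclasses import dataclass

@dataclass
class ProbeResult:
    countryCode: str   # ISO 3166-1 alpha-2
    status: str        # success / failed / error
    blocked: bool
    statusCode: int
    error: str
    summary: str

    @property
    def reachable(self) -> bool:
        """Convenience verdict: succeeded and was not rejected."""
        return self.status == "success" and not self.blocked

raw = '''{"countryCode": "US", "status": "success", "blocked": false,
          "statusCode": 200, "error": "", "summary": "Accessible from US IP"}'''
result = ProbeResult(**json.loads(raw))
print(result.reachable)  # True
```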
PHASE 3

LLM web-search access

4 of 4 reachable

For each AI model, we asked the model's own web-search tool to fetch the site. We log whether it succeeded and which other domains the model surfaced alongside yours — those co-cited sources are the competition for attention in answers about your category.

Provider
Model
Status
Co-cited sources
Notes
OpenAI (gpt-5.4)
gpt-5.4
Reachable
none — fetched directly
Fetched b2itech.com via web_search tool and returned a heading/summary matching the live page. Did NOT cite b2itech.com inline — answered from synthesised content. Contrast: the bot impersonation matrix shows GPTBot and ChatGPT-User both blocked at the WAF, yet ChatGPT can still answer accurately at query-time because OAI-SearchBot (which IS reachable) feeds its live search results.
Anthropic (claude-sonnet-4-6)
claude-sonnet-4-6
Reachable
none — fetched directly
Fetched and summarised correctly via web_search. Did not cite the URL inline. Same pattern as OpenAI: ClaudeBot and Claude-SearchBot are blocked at the WAF, but Claude-User passes through (4.1s, 200) and powers accurate live answers.
Gemini (gemini-3.1-flash-lite-preview)
gemini-3.1-flash-lite-preview
Reachable
b2itech.com
Gemini is the only engine that cited b2itech.com directly inline. Googlebot core is reachable; Googlebot-Smartphone is timing out — fix the mobile-UA path so Gemini can index the responsive experience without falling back to desktop crawl.
Perplexity (sonar)
sonar
Reachable
b2itech.com · b2idigital.com · getlatka.com · easyleadz.com
Reachable and content-matched, but Perplexity blends 3 third-party sources alongside b2itech.com — including b2idigital.com (an imitator vendor) and getlatka.com (a B2i-vs-Notified comparison aggregator). PerplexityBot itself is reachable but slow (~36s); the citation share leak is the bigger issue.
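The "citation share leak" can be quantified per engine as your domain's share of all domains the engine cited. A minimal sketch using the Perplexity citation list above (equal weighting per citation is our simplification; RankBee may weight differently):

```python
def citation_share(cited_domains, your_domain):
    """Fraction of an engine's citations that point at your domain."""
    if not cited_domains:
        return 0.0
    return cited_domains.count(your_domain) / len(cited_domains)

perplexity_citations = ["b2itech.com", "b2idigital.com",
                        "getlatka.com", "easyleadz.com"]
print(citation_share(perplexity_citations, "b2itech.com"))  # 0.25
```

A share of 0.25 means three quarters of the citation real estate in that answer went to third parties, including an imitator domain.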
PHASE 4

Bot impersonation test

4 critical bots inaccessible

We sent requests using each bot's exact User-Agent string. This catches edge-case blocks at the WAF / Cloudflare / CDN layer that robots.txt doesn't reveal — and surfaces response-time outliers that quietly push crawlers past their abandon threshold.

Bot · Status · HTTP · Response time
oai-searchbot · accessible · 200 · 4,400 ms
chatgpt-user · blocked · 408 · 40,000 ms (timeout)
gptbot · blocked · 408 · 40,000 ms (timeout)
chatgpt-agent · blocked · 408 · 40,000 ms (timeout)
perplexitybot · accessible · 200 · 35,900 ms ⚠️
perplexity-user · accessible · 200 · 34,000 ms ⚠️
googlebot · accessible · 200 · 4,500 ms
googlebot-smartphone · blocked · 408 · 40,000 ms (timeout)
bingbot · accessible · 200 · 17,600 ms ⚠️
bing-copilot · blocked · 408 · 40,000 ms (timeout)
claudebot · blocked · 408 · 40,000 ms (timeout)
claude-user · accessible · 200 · 4,100 ms
claude-searchbot · blocked · 408 · 40,000 ms (timeout)
grok · blocked · 408 · 40,000 ms (timeout)
deepseek · accessible · 200 · 19,600 ms ⚠️
Patterns to investigate: Review any blocked or slow bots above — bots responding in 10s or more are likely truncating or skipping your pages even when the HTTP status is 200. Most LLM crawlers abandon at 3–5s. Note: we don't yet know whether these are real production issues; they require deeper infrastructure investigation to confirm.
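The impersonation probe itself is straightforward to reproduce: send an ordinary GET with the bot's User-Agent and time it. A sketch (pass each vendor's documented User-Agent string yourself; the 10-second "slow" threshold below mirrors this report's buckets and is our assumption, not a crawler-published limit):

```python
import time
import urllib.error
import urllib.request

def probe(url, user_agent, timeout=40):
    """Fetch url with a bot User-Agent; return (status_code, seconds)."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    start = time.monotonic()
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status, time.monotonic() - start
    except urllib.error.HTTPError as exc:   # 4xx/5xx still carries a code
        return exc.code, time.monotonic() - start
    except Exception:                        # timeout, TLS error, reset
        return None, time.monotonic() - start

def classify(status_code, elapsed_s, slow_after=10.0):
    """Map a probe result onto this report's accessible/slow/blocked buckets."""
    if status_code is None or status_code >= 400:
        return "blocked"
    return "slow" if elapsed_s >= slow_after else "accessible"

# e.g. classify(*probe("https://b2itech.com/", "<documented bot UA string>"))
print(classify(200, 4.4), classify(408, 40.0), classify(200, 35.9))
# accessible blocked slow
```

Note that a WAF can fingerprint more than the User-Agent header (IP ranges, TLS fingerprint), so a pass here is necessary but not sufficient evidence that the real crawler gets through.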
PHASE 5

Indexability · token depth

All 5 pages healthy

Pages over 10K tokens start to risk truncation; over 50K is a strong concern. Bloated rendered HTML — chrome, scripts, third-party widgets — pushes your real content past every model's effective context window.

Page · Tokens · Status
Homepage (https://b2itech.com/) · 4.5K · Healthy
Hosted IR Websites (https://b2itech.com/hosted-websites/) · 3.2K · Healthy
DataAnywhere / IROffice (https://b2itech.com/dataanywhere-and-iroffice/) · 2.8K · Healthy
Pricing (https://b2itech.com/pricing/) · 4.2K · Healthy
SEO / LLM Optimization (https://b2itech.com/seo/) · 2.6K · Healthy
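Token depth can be approximated without a model tokenizer: extract the visible text from the rendered HTML and apply the common rough heuristic of ~4 characters per token. A sketch under those assumptions (real tokenizers will differ, and RankBee's exact method is not documented here):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text, skipping script/style bodies."""
    def __init__(self):
        super().__init__()
        self.skip = False
        self.chunks = []
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip = True
    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self.skip = False
    def handle_data(self, data):
        if not self.skip:
            self.chunks.append(data)

def estimate_tokens(raw_html):
    """Very rough token estimate: visible characters / 4."""
    p = TextExtractor()
    p.feed(raw_html)
    text = " ".join(" ".join(p.chunks).split())  # normalise whitespace
    return len(text) // 4

sample = ("<html><script>var x = 1;</script><body><p>"
          + "word " * 400 + "</p></body></html>")
print(estimate_tokens(sample))  # 499
```

The useful signal is the ratio between this estimate on the raw rendered HTML versus the extracted text: a page whose HTML is 50K tokens but whose visible text is 3K is spending its crawl budget on chrome.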
04 · Sentiment Snapshot

Sentiment across the 4 buyer conversations

Each cluster groups the prompts that share a buyer mindset. Sentiment is parsed from the real 50-cell engine responses — positive = explicitly recommended by at least one engine, neutral = mentioned without recommendation language nearby, absent = the brand never appears.
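The three-way rule described above can be sketched directly. A toy implementation (the keyword list is illustrative, and matching anywhere in a mentioning response is a simplification of "recommendation language nearby"; RankBee's parser presumably uses richer signals):

```python
RECOMMEND_MARKERS = ("recommend", "best choice", "top pick")

def brand_sentiment(engine_responses, brand):
    """positive = recommended at least once, neutral = mentioned, absent = never."""
    b = brand.lower()
    mentioned = [r.lower() for r in engine_responses if b in r.lower()]
    if not mentioned:
        return "absent"
    if any(marker in r for r in mentioned for marker in RECOMMEND_MARKERS):
        return "positive"
    return "neutral"

responses = [
    "For widgets we recommend B2i Technologies and Q4.",
    "Notified and Q4 both offer IR sites.",
]
print(brand_sentiment(responses, "B2i Technologies"))  # positive
print(brand_sentiment(responses, "Notified"))          # neutral
print(brand_sentiment(responses, "Equisolve"))         # absent
```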

Operational risk
3 prompts · 12 model responses analysed
Absent

B2i scored 0 of 15 cells across switching risk (P4), SEC disclosure timing 8-K/10-Q/10-K (P5) and earnings-day uptime (P6). Engines answered P4 with generic risk-management content (no vendor named in 24 of 25 cells); P5 was dominated by Q4 and Notified citing their disclosure workflows, with Equisolve and EQS Group surfacing once each on AIO; P6 went to Q4 and Notified with Nasdaq IR Insight and Equisolve in supporting cells. B2i's homepage claim of '99.9% uptime, 25 years strong' did not register — there is no SLA documentation, no earnings-day case study, no migration runbook published in a form the engines can cite. This is mid-funnel content the buyer can use against B2i in a structured RFP.

Compliance & disclosure
2 prompts · 8 model responses analysed
Absent

B2i is at 0 of 10 cells across Reg FD (P7) and ADA/WCAG 2.2 (P8). Equisolve owns this cluster outright: 5 mentions in P7 from AI Overviews on the back of its public WCAG/VPAT and Reg FD documentation, and 7 mentions across 3 engines in P8. Q4, Notified and Nasdaq IR Insight pick up neutral mentions. The RankBee content scorecard confirms the underlying gap — the /hosted-websites/ page rewrite spec from task 372 explicitly calls out missing SEC/EDGAR/XBRL specifics, security (SOC 2, SSL, MFA, uptime SLA) and WCAG/VPAT sections. Adding a public compliance & accessibility hub is the highest-leverage move B2i can make to enter this conversation.

Vendor evaluation
3 prompts · 12 model responses analysed
Positive

B2i covers 7 of 15 cells (47%) and is explicitly recommended in 6 of those 7. The head-to-head P2 prompt drives the result: every engine recommended B2i alongside Q4, Notified and Issuer Direct (5/5 cells, 45 mentions — the highest single-prompt mention count of any brand). On the open 'best provider 2026' P1, Claude and AI Overviews both recommended B2i with 5 and 4 mentions respectively — but ChatGPT, Gemini and Perplexity all returned 0 cells, defaulting to Q4 / Notified / Irwin / Equisolve. The surprising miss is P3 (top vendors for small-cap / mid-cap NYSE/Nasdaq companies): B2i scored 0 cells out of 5 — the engines recommended Q4, Notified, Nasdaq IR Insight, Equisolve and Broadridge instead. B2i has no dedicated small-cap positioning page that the engines could index, so it falls out of the slot it should arguably own.

Infrastructure & onboarding
2 prompts · 8 model responses analysed
Positive

Split cluster: B2i wins on widgets (P9) and is absent on migration (P10). On P9 (real-time stock data widgets / WordPress IR plugins), Claude (8 mentions) and AI Overviews (10 mentions) both recommended B2i; Q4 was second with 3 cells / 8 mentions; Kaleidoscope picked up 1 cell via Claude. ChatGPT, Gemini and Perplexity all answered P9 with generic stock-data API content (EODHD, Polygon, FMP) that doesn't include any IR-website vendor — a citation-share gap to close. On P10 (launch + migration steps), B2i scored 0 cells: engines cited Q4, Notified, Equisolve and EQS Group with operational specifics B2i doesn't publish (typical timelines, data-feed mapping, DNS/hosting transfer). Publishing a public migration timeline and a 'WordPress + DataAnywhere quickstart' would convert two zero-cell prompts (P10 and parts of P9) into B2i recommendations.

Sentiment leaderboard

Share of voice across 10 prompts × 4 models (Pos · Neu · Abs)
1. Q4 Inc · 4 · 4 · 2
2. Equisolve · 4 · 4 · 2
3. Notified · 3 · 5 · 2
4. B2i Technologies (you) · 3 · 0 · 7
5. Nasdaq IR Insight · 2 · 2 · 6
6. Issuer Direct · 1 · 0 · 9
7. Broadridge · 1 · 0 · 9
8. EQS Group · 0 · 3 · 7
9. VendorGroup · 0 · 1 · 9
10. Kaleidoscope · 0 · 1 · 9

Frequently asked

What is a GAIO Deficit Report?

GAIO stands for Generative AI Optimization — getting your brand cited inside AI answers, not just ranked on a results page. The Deficit Report is RankBee's diagnostic: across leading AI engines (ChatGPT, Gemini, Perplexity, Claude, and Google AI Overviews) and a tailored prompt set, it shows which answers your brand is missing from, which competitors take the citation in your place, and the technical and content reasons why.

Who is this for?

Anyone whose audience now turns to ChatGPT, Gemini, Perplexity or Claude before making a decision. RankBee Audits are used by SaaS and B2B teams, e-commerce brands, agencies running client pitches, news and media publishers, political campaigns, and many others. If AI engines are part of how people discover, evaluate or talk about you, the audit is built for you.

How is this different from a traditional SEO audit?

A traditional audit grades you on Google's signals — backlinks, keywords, Core Web Vitals. RankBee grades you on what large language models actually reason about: entities, attributes, answer-first structure, citation-worthiness, and crawlability through the bot stack AI assistants use today (GPTBot, ClaudeBot, PerplexityBot, Google-Extended and 20 more). Strong Google rankings don't automatically translate into AI citations, and that gap is what the audit measures.

How does the audit work?

Four sections, each grounded in real data. Crawlability runs five technical phases: robots.txt rules, virtual-user probes from your target geographies, live LLM web-search fetches, bot-impersonation against your CDN, and token-depth indexability. Rankings Matrix runs your buyer prompts against up to 5 AI engines and logs every citation, co-citation, and competitor mention. Content Scorecard simulates AI ranking at the page level — RankBee ingests competitor content, generates variations, and scores yours 1–10 on the attributes models actually reward. Sentiment Snapshot reads how engines describe you when they do mention you, clustered by audience intent.

Where do the prompts come from?

RankBee discovers them for you. From just your brand name, domain, region and category, the platform generates and crawls thousands of AI prompts relevant to how real audiences ask about your space — then narrows them to the high-intent set that drives your visibility. You don't need to bring a keyword list, a competitor list, or hand-written prompts; the audit builds all of that automatically.

What does "invisible to AI" actually mean?

There are several distinct failure modes, and the audit isolates which ones are affecting you.

  • Uncrawlable. Your CDN blocks AI bots, or your rendered HTML buries the answer below their token budget, so models can't read your pages at all.
  • Crawlable but uncited. Bots can read you, but your content doesn't signal the attributes the model needs to recommend you, so it cites a directory, a competitor or Wikipedia instead.
  • Cited but mis-framed. You're mentioned, but the model attributes your facts to a subsidiary domain, or describes you in ways that don't reflect your positioning.
  • Locked out of live retrieval. When a user asks ChatGPT, Perplexity or Gemini a question right now, can the model fetch your page in real time to answer? The crawlability audit tests this end-to-end — many sites pass robots.txt but fail at the CDN or render layer, so live retrieval silently fails.
  • Excluded from training data. Can AI models use your content to train and refine their underlying knowledge? Your robots.txt and bot policies decide whether crawlers like GPTBot, ClaudeBot, Google-Extended and CCBot are allowed to ingest you. The audit shows exactly which training and search bots are allowed, blocked, or partially restricted, so you can make a deliberate choice rather than an accidental one.
How long does it take, and what do I need to provide?

Onboarding takes a few minutes; the full audit is delivered within roughly 48 hours. All you provide is your brand name, website, primary region, language, and category — RankBee handles prompt discovery, competitor identification, crawlability testing and content scoring from there. Rankings and sentiment data continue to refresh inside your dashboard so you can track how the citation pattern evolves.

What happens after the report — does it fix the issues?

The audit diagnoses; remediation happens in the rest of the platform. Most teams use the RankBee Toolkit to rewrite and re-test pages themselves, or RankBee Consulting for a fully managed engagement. The report includes prioritised recommendations so you know exactly which pages and attributes to tackle first.

Can I share the report with my team and stakeholders?

Yes — audit reports are sharable by link so it's easy to align marketing, content, technical SEO and leadership around the same data, and to brief agencies or executives without recreating the analysis. Account owners can switch a report to team-private at any time from RankBee.

How do I get a full audit?
Full audits are available to RankBee subscribers. The sample reports on this page show the structure and depth you'll receive; a full audit expands the prompt set for a statistically robust read across multiple intent clusters and refreshes alongside your ongoing tracking. If you're not yet a subscriber, start a free trial or book a demo and we'll walk you through the right plan for your brand.
Run the same audit on your domain

See exactly how AI engines describe your brand — and where the citation share is leaking.

This audit combined a 25-bot crawl, 5 head-to-head content scoring jobs and a 50-cell AI-engine modelling run to map B2i Technologies' real AI visibility against Q4, Notified, Equisolve and the rest of the IR-website field. Want one for your own brand? RankBee can run the same audit in under 10 minutes.

Prepared by RankBee · rankbee.ai · GAIO-B2I-2026-05