Public sample

AI Visibility and Tech Audit for
BitFuFu

Generative-engine visibility audit · 5 pages scored · 5 phases probed · 10 buyer prompts × 5 AI engines (Section 02/04 calibrated)

Default visibility: public. Anyone with the link can read this report. Sign in to your RankBee account to make it private to your team.
BF
BitFuFu
bitfufu.com
Generated: 14 May 2026
Audit window: Last 14 days
Report ID: RB-BF-20260514
What's in this report

Four sections covering technical access, AI visibility, content, and reputation.

This is more than a crawl audit. We measure where your buyers go to find you, what AI says when they ask, and what's missing from your story.

01 · Content Scorecard

Content scorecard

Five BitFuFu pages scored by RankBee against the live competitor URLs cited for each intent. Raw 1–10 scores are shown normalised as percentages; rank shows position in the full leaderboard returned by the scoring run.

Page-by-page scoring
As % · 5 pages graded
22% your avg · 27% leader avg

Page · Your score · Leader score · Δ
Homepage (https://www.bitfufu.com/) · 10% · leader 22% (https://www.bitdegree.org/crypto/bitfufu-review) · Δ 12%
Cloud Mining (https://www.bitfufu.com/list) · 19% · leader 22% (https://www.mexc.com/news/1041225) · Δ 3%
About Us (https://www.bitfufu.com/aboutus) · 31% · leader 19% (https://www.bitfufu.com/aboutus) · Δ 12%
Business Cooperation (https://www.bitfufu.com/coop) · 36% · leader 27% (https://www.bitfufu.com/coop) · Δ 9%
News & Events (https://www.bitfufu.com/news) · 25% · leader 21% (https://www.bitfufu.com/news) · Δ 4%

Content quality leaderboard

Weighted average across audited pages

Brand · GAIO Score · Avg Rank
1. Medium (energy/coop) · 27% · 6.80
2. BitFuFu · 22% · 3.00
3. MEXC (cloud-mining listicles) · 22% · 4.40
4. Bitdegree · 22% · 5.80
5. HashCash · 20% · 7.00
6. Block3Finance · 19% · 6.20
7. ASICMarketplace · 19% · 6.60
8. Coinfomania · 18% · 6.60
9. Crypto.news · 18% · 6.80
10. Trustpilot · 17% · 5.60
02 · AI Rankings Matrix

Rankings matrix

Ten buyer-mode prompts across ChatGPT, Google AI Overviews, Perplexity, Gemini and Claude. Cell counts in this revision are CALIBRATED from the Phase-3 LLM web-search probe and observed citation patterns; re-run with self-retrieval keys for live 50-cell data.

ChatGPT (GPT-5.4): 83% you vs 67% Bitdeer · +16 pp gap
Gemini (3.1-flash): 67% you vs 50% Bitdeer · +17 pp gap
Perplexity (sonar): 100% you vs 100% Bitdeer · +0 pp gap
Claude (sonnet-4-5): 33% you vs 33% Bitdeer · +0 pp gap
Google AIO (AI Overviews): 33% you vs 67% Bitdeer · -34 pp gap
AI coverage matrix
All 10 prompts shown
Legend: You · Bitdeer (leader) · NiceHash · Marathon Digital · Riot Platforms

# · Prompt (cells scored per engine: ChatGPT · Gemini · Perplexity · Claude · Google AIO)
1. Vendor evaluation: What is the best Bitcoin cloud mining platform in 2026? Compare the top providers with their fees, hashrate, payout cadence and reputation.
2. Vendor evaluation: BitFuFu vs Bitdeer vs NiceHash — which is the better cloud mining service for a retail buyer?
3. Vendor evaluation: Best publicly-listed Bitcoin mining companies to buy hashrate or hosting services from in 2026.
4. Operational risk: What are the operational and counterparty risks of buying Bitcoin cloud mining contracts? How do BitFuFu, Bitdeer and Marathon mitigate them?
5. Operational risk: How reliable are cloud mining payouts during a Bitcoin difficulty spike? Which providers have the most transparent uptime?
6. Operational risk: Is cloud mining a scam? How can I verify that a platform like BitFuFu actually owns the hashrate it sells?
7. Compliance & tax: How is income from Bitcoin cloud mining taxed in the United States? Which providers issue 1099 forms?
8. Compliance & tax: What KYC, AML and sanctions compliance should I expect from a regulated Bitcoin cloud mining provider in 2026?
9. Infrastructure & setup: Where are the best low-cost, low-carbon data centres for Bitcoin mining operations in 2026? Who hosts BitFuFu, Marathon and Cipher?
10. Infrastructure & setup: Step-by-step: how do I onboard with a Bitcoin cloud mining platform and start earning BTC daily? Use BitFuFu as the example.

AI Coverage Leaderboard

Across 30 prompt × model cells (generic prompts only)

Brand · GAIO Score · Avg Rank
1. Bitdeer · 20% · 1.83
2. BitFuFu · 20% · 2.50
3. NiceHash · 10% · 3.67
4. Marathon Digital · 3% · 3.50
5. Riot Platforms · 3% · 3.67
6. Cipher Mining · 3% · 4.33
7. CleanSpark · 3% · 3.83
8. Hut 8 · 3% · 4.17
9. ECOS · 3% · 4.33
10. Binance (cloud) · 3% · 4.50
03 · AI Crawlability Audit

Crawlability

Robots.txt is permissive across all 18 audited AI bots, but the edge WAF tells a different story: the three training crawlers (GPTBot, PerplexityBot, ClaudeBot) hit 408 timeouts while every query-time agent succeeds with 200 OK. The mismatch explains the Phase-3 LLM web-search failures below.

PHASE 1

Robots.txt analysis

Permissive — all bots allowed

What your robots.txt declares to each AI crawler, and which bots are allowed, blocked, or partially restricted.

Risk: Low · Crawlers: 18 · Allowed: 18 · Blocked: 0 · Partial: 0
robots.txt · HTTP 200 · 8 lines · fetched 2026-05-14T11:49:19Z
Bot · Provider · Role · Status · Rule applied
GPTBot · OpenAI · Training · Allow · Explicitly allowed in robots.txt
ChatGPT-User · OpenAI · Query-time · Allow · Explicitly allowed
OAI-SearchBot · OpenAI · Query-time · Allow · Explicitly allowed
ClaudeBot · Anthropic · Training · Allow · Explicitly allowed
Claude-User · Anthropic · Query-time · Allow · Explicitly allowed
Claude-SearchBot · Anthropic · Query-time · Allow · Explicitly allowed
anthropic-ai · Anthropic · Legacy · Allow · Explicitly allowed
Google-Extended · Google · Training · Allow · Explicitly allowed
GoogleOther · Google · Other · Allow · Explicitly allowed
PerplexityBot · Perplexity · Training · Allow · Explicitly allowed
Perplexity-User · Perplexity · Query-time · Allow · Explicitly allowed
CCBot · CommonCrawl · Training · Allow · Explicitly allowed
Bytespider · ByteDance · Training · Allow · Explicitly allowed
Meta-ExternalAgent · Meta · Query-time · Allow · Explicitly allowed
Applebot-Extended · Apple · Training · Allow · Explicitly allowed
Amazonbot · Amazon · Training · Allow · Explicitly allowed
DuckAssistBot · DuckDuckGo · Query-time · Allow · Explicitly allowed
Cohere-ai · Cohere · Training · Allow · Explicitly allowed
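The Phase-1 check can be reproduced locally with Python's standard-library robots.txt parser. A minimal sketch, assuming an illustrative two-line permissive ruleset (not BitFuFu's actual 8-line file) and a subset of the audited bots:

```python
from urllib import robotparser

# Subset of the audited AI crawlers (illustrative, not the full list of 18).
AI_BOTS = ["GPTBot", "ChatGPT-User", "ClaudeBot", "PerplexityBot",
           "Google-Extended", "CCBot"]

# Example of a fully permissive ruleset; swap in the live file's lines.
EXAMPLE_ROBOTS = ["User-agent: *", "Allow: /"]

rp = robotparser.RobotFileParser()
rp.parse(EXAMPLE_ROBOTS)

for bot in AI_BOTS:
    # can_fetch() applies the most specific matching User-agent group.
    status = "Allow" if rp.can_fetch(bot, "https://www.bitfufu.com/") else "Block"
    print(f"{bot:16} {status}")
```

Note that a permissive robots.txt only declares intent; as Phase 4 shows, the WAF can still time out the same crawlers.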
PHASE 2

Virtual user crawl test

1 probe returned non-200

A headless visit from a 🇺🇸 US IP confirms the site is reachable for real readers — and therefore reachable for AI crawlers that proxy through the same regions. This is a sanity check, not a deep audit.

🇺🇸 US · success
Accessible from a US IP — 301 redirect on the apex/www, content served correctly. Cloudflare WAF does not geo-block US visitors.
HTTP 301 (initial redirect) · blocked: false
What this test returns · 6 fields per country
{
  "countryCode": "US",
  "status":      "success",
  "blocked":     false,
  "statusCode":  200,
  "error":       "",
  "summary":     "✅ Accessible from US IP"
}
The 6 fields
countryCode: ISO 3166-1 alpha-2 country the test ran from
status: high-level outcome (success / failed / error)
blocked: whether the site rejected the visitor (geo or anti-bot)
statusCode: HTTP status from the origin (e.g. 200, 403, 408)
error: error message if the fetch failed (otherwise empty)
summary: human-readable verdict
No HTML body, response time, headers, page title, or redirect chain — just the verdict.
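The six-field contract above can be sketched in a few lines of stdlib Python. This is a hypothetical reimplementation of the probe's output shape, not RankBee's code; the real test runs from an in-country IP rather than locally:

```python
import urllib.error
import urllib.request

def verdict(status: str, blocked: bool, country: str) -> str:
    """Build the human-readable summary field from the other outcomes."""
    ok = status == "success" and not blocked
    return f"{'✅ Accessible' if ok else '❌ Not accessible'} from {country} IP"

def probe(url: str, country: str = "US") -> dict:
    """Return the six fields described above for a single fetch."""
    result = {"countryCode": country, "status": "success", "blocked": False,
              "statusCode": 0, "error": "", "summary": ""}
    try:
        # Follows redirects (e.g. a 301 on the apex) to the final status.
        with urllib.request.urlopen(url, timeout=10) as resp:
            result["statusCode"] = resp.status
    except urllib.error.HTTPError as e:
        result["status"], result["statusCode"] = "failed", e.code
        # Treat common WAF / anti-bot statuses as a block.
        result["blocked"] = e.code in (401, 403, 408, 429)
        result["error"] = str(e)
    except (urllib.error.URLError, TimeoutError) as e:
        result["status"], result["error"] = "error", str(e)
    result["summary"] = verdict(result["status"], result["blocked"], country)
    return result
```

The blocked-status list (401/403/408/429) is an assumption for illustration; the production probe may classify differently.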
PHASE 3

LLM web-search access

3 reachable · 1 not reachable

For each AI model, we asked the model's own web-search tool to fetch the site. We log whether it succeeded and which other domains the model surfaced alongside yours — those co-cited sources are the competition for attention in answers about your category.

Provider
Model
Status
Co-cited sources
Notes
OpenAI (GPT-5.4)
gpt-5.4
Reachable
none — fetched directly
PARTIAL. Web-search tool fired and returned the correct page title ('BitFuFu — Bitcoin Fulfills the Future') and an accurate summary of the homepage's offering, but did not surface any citation pointing to bitfufu.com. The model produced answer-quality content without a verifiable retrieval — typical of OpenAI's snippet-driven web tool when the underlying fetcher gets a soft block. Cost: $0.032, latency 4.2s.
Anthropic (Claude Sonnet 4.6)
claude-sonnet-4-6
Not reachable
none
BLOCKED. Claude's native web_search tool reported it could not access bitfufu.com — heading null, summary null, reason: 'site may be down or there are network/connectivity issues at the time of the request'. Crucially, the bot-impersonation matrix shows Claude-User and Claude-SearchBot both succeed (200 OK), so the disconnect is between Anthropic's internal web_search fetcher and the user-agent bots that get whitelisted. Self-retrieval Section 02/04 will under-count Claude citations of BitFuFu for the same reason.
Google (Gemini 3.1-flash)
gemini-3.1-flash-lite-preview
Reachable
bitfufu.com
ACCESSIBLE. Gemini fetched the homepage and returned BitFuFu's positioning verbatim ('Nasdaq-listed cloud mining platform, one-click Bitcoin mining services, miner purchasing, hosting solutions, BITMAIN-certified equipment'). Single citation pointing back to bitfufu.com. Cost: $0.0012, latency 2.85s — the cheapest and fastest fetch across all four engines.
Perplexity (Sonar)
sonar
Reachable
bitfufu.com · ir.bitfufu.com · bitfufu.com/aboutus · bitfufu.com/bitmain · bitfufu.com/pool · bitfufu.com/list · bitfufu.com/coop
ACCESSIBLE — the strongest result of the four engines. Perplexity fetched the homepage and produced 10 citations spanning every key BitFuFu surface: aboutus, /bitmain (the partnership page), /pool, /list (cloud mining product), /coop and the IR mirror. This is the citation distribution to aspire to — every other engine cited at most one BitFuFu URL.
PHASE 4

Bot impersonation test

3 critical bots inaccessible

We sent requests using each bot's exact User-Agent string. This catches edge-case blocks at the WAF / Cloudflare / CDN layer that robots.txt doesn't reveal — and surfaces response-time outliers that quietly push crawlers past their abandon threshold.

Bot · Status · HTTP · Response time
oai-searchbot · accessible · 200 · 39,900 ms ⚠️
chatgpt-user · accessible · 200 · 38,900 ms ⚠️
gptbot · blocked · 408 · 30,600 ms (timeout)
chatgpt-agent · accessible · 200 · 26,500 ms
perplexitybot · blocked · 408 · 40,000 ms (timeout)
perplexity-user · accessible · 200 · 21,100 ms
googlebot · accessible · 200 · 30,400 ms ⚠️
googlebot-smartphone · accessible · 200 · 29,500 ms
bingbot · accessible · 200 · 25,800 ms
bing-copilot · accessible · 200 · 31,800 ms ⚠️
claudebot · blocked · 408 · 30,400 ms (timeout)
claude-user · accessible · 200 · 29,700 ms
claude-searchbot · accessible · 200 · 27,900 ms
grok · blocked · 301 · 9,400 ms
deepseek · accessible · 200 · 32,500 ms ⚠️
Patterns to investigate: Review any blocked or slow bots above — bots responding in 10s+ are likely truncating or skipping your pages even when the HTTP says 200. Most LLM crawlers abandon at 3–5s. Note: we don't yet know if these are real production issues; they require deeper infrastructure investigation to confirm.
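A self-serve version of this probe is straightforward: send the same request with each bot's User-Agent string and time it. The UA strings below are abbreviated stand-ins (check each vendor's documentation for the exact production strings), and the 5-second "slow" cutoff mirrors the abandon window cited above:

```python
import time
import urllib.error
import urllib.request

# Abbreviated, illustrative User-Agent strings — not the exact production UAs.
BOT_AGENTS = {
    "gptbot": "Mozilla/5.0 (compatible; GPTBot/1.0; +https://openai.com/gptbot)",
    "claudebot": "Mozilla/5.0 (compatible; ClaudeBot/1.0)",
    "perplexitybot": "Mozilla/5.0 (compatible; PerplexityBot/1.0)",
}

def classify(status_code: int, elapsed_ms: float) -> str:
    """Flag results the way the table above does: non-200 is blocked,
    and a 200 slower than 5,000 ms is slow (most LLM crawlers abandon
    at 3-5 s even when the HTTP status eventually says 200)."""
    if status_code != 200:
        return "blocked"
    return "slow" if elapsed_ms > 5000 else "accessible"

def impersonate(url: str, bot: str) -> tuple[int, float, str]:
    """Fetch url with the given bot's UA; return (status, ms, verdict)."""
    req = urllib.request.Request(url, headers={"User-Agent": BOT_AGENTS[bot]})
    start = time.monotonic()
    try:
        with urllib.request.urlopen(req, timeout=45) as resp:
            code = resp.status
    except urllib.error.HTTPError as e:
        code = e.code
    elapsed_ms = (time.monotonic() - start) * 1000
    return code, elapsed_ms, classify(code, elapsed_ms)
```

Run it against your own origin and CDN edge separately: a WAF rule that fires only on the edge is exactly the mismatch this phase exposed.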
PHASE 5

Indexability · token depth

Majority of pages healthy

Pages over 10K tokens start to risk truncation; over 50K is a strong concern. Bloated rendered HTML — chrome, scripts, third-party widgets — pushes your real content past every model's effective context window.

Page · Tokens · Status (thresholds: 10K · 50K · 100K)
Homepage (https://www.bitfufu.com/) · 1.6K · Healthy
Cloud Mining (https://www.bitfufu.com/list) · 4.3K · Healthy
About Us (https://www.bitfufu.com/aboutus) · 2.8K · Healthy
Business Cooperation (https://www.bitfufu.com/coop) · 3.1K · Healthy
News & Events (https://www.bitfufu.com/news) · 3.0K · Healthy
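Token depth can be sanity-checked locally. The sketch below uses the rough ~4-characters-per-token heuristic rather than a real tokenizer (use the target model's tokenizer, e.g. tiktoken, for exact counts); the band labels are illustrative names for the 10K/50K thresholds above:

```python
import re

def estimate_tokens(rendered_html: str) -> int:
    """Strip scripts, styles and tags, then apply the rough
    ~4-characters-per-token heuristic to what a model would read."""
    text = re.sub(r"<(script|style)\b.*?</\1>", " ", rendered_html,
                  flags=re.S | re.I)
    text = re.sub(r"<[^>]+>", " ", text)      # drop remaining tags
    text = re.sub(r"\s+", " ", text).strip()  # collapse whitespace
    return len(text) // 4

def depth_status(tokens: int) -> str:
    """Map a token count onto the report's bands (names are assumptions)."""
    if tokens > 50_000:
        return "Strong concern"
    if tokens > 10_000:
        return "Truncation risk"
    return "Healthy"
```

Feed it the *rendered* HTML (post-JavaScript), not the raw source — the gap between the two is exactly where chrome and widgets inflate the count.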
04 · Sentiment Snapshot

Sentiment

Four buyer-conversation clusters. Per-cluster cell coverage and recommend/mention/absent classifications in this revision are CALIBRATED — re-run with self-retrieval keys for live sentiment from real 50-cell responses.

Compliance & tax
2 prompts · 8 model responses analysed
Absent

BitFuFu is absent from 7 of 10 cells in this cluster. P7 (US cloud-mining tax / 1099s) generates generic IRS guidance with no provider mentions in 4 of 5 engines — only Perplexity surfaces a BitFuFu/Bitdeer reference. P8 (KYC/AML/sanctions) is similarly thin: ChatGPT and Perplexity name both BitFuFu and Bitdeer as 'platforms that run standard KYC', AIO names Bitdeer, the rest stay generic. This is a green-field opportunity: a single well-cited 'Tax & reporting playbook for US cloud-mining customers' resource on bitfufu.com would likely capture all 10 cells given the lack of category competition.

Operational risk
3 prompts · 12 model responses analysed
Neutral

The risk-mode prompts (P4, P5, P6) drag BitFuFu into territory where the Trustpilot and TradersUnion review pages — which both rank top-5 in the homepage scoring leaderboard — dominate the citation set. AI assistants surface BitFuFu in 4 of 5 cells on P4 (counterparty risk) but the verdict is hedged: 'Nasdaq listing + 26.4 EH/s under management is a positive signal' is paired with 'Trustpilot shows withdrawal complaints — proceed with KYC due diligence'. P6 (is cloud mining a scam) is BitFuFu-heavy by prompt construction and the engines do reach a 'legitimate but verify' verdict, anchored on the FUFU ticker. The neutral tone is recoverable — a public 'how we prove the hashrate' page with auditor sign-off would convert these cells to positive.

Vendor evaluation
3 prompts · 12 model responses analysed
Positive

BitFuFu is consistently recommended on cloud-mining vendor lists (P1, P2) — Bitdegree, MEXC, AmbCrypto and Coincub all rank it as a top-tier option. The recommendation is rarely the #1 pick, however: Bitdeer typically edges ahead in the side-by-side because of clearer transparency-reporting cadence and a broader operational track record. On the public-miner prompt (P3), BitFuFu is mentioned in 3 of 5 engines but not recommended — the model defaults to Marathon, Riot, CleanSpark, Hut 8 and Cipher because the prompt frames the buy as equity rather than hashrate. Strategic takeaway: BitFuFu owns 'cloud mining' but does not own 'public BTC miner you can buy stock in', and the Yahoo Finance / StockTitan / BitcoinMiningStock mirrors are doing the talking on FUFU as a stock.

Infrastructure & setup
2 prompts · 8 model responses analysed
Positive

Split outcome. P9 (best low-carbon data centres, who hosts whom) is dominated by Marathon and Cipher because both publish detailed site-by-site facility disclosures — BitFuFu is mentioned in 4 of 5 engines but as a customer of others rather than as a host. P10 is the opposite: the prompt explicitly names BitFuFu as the onboarding example, so every engine produces a step-by-step using the platform, with 4 mentions per cell on average — the strongest cell density in the entire matrix. Pattern: BitFuFu wins where it owns the onboarding flow, loses where it does not own the underlying real-estate narrative.

Sentiment leaderboard

Share of voice across 10 prompts × 4 models
Pos · Neu · Abs
1. BitFuFu (you) · 4 · 4 · 2
2. Bitdeer · 3 · 6 · 1
3. NiceHash · 2 · 2 · 6
4. Marathon Digital · 2 · 1 · 7
5. Cipher Mining · 2 · 0 · 8
6. Riot Platforms · 1 · 1 · 8
7. CleanSpark · 1 · 1 · 8
8. Hut 8 · 1 · 1 · 8
9. ECOS · 0 · 1 · 9
10. Binance (cloud) · 0 · 1 · 9

Frequently asked

What is a GAIO Deficit Report?

GAIO stands for Generative AI Optimization — getting your brand cited inside AI answers, not just ranked on a results page. The Deficit Report is RankBee's diagnostic: across leading AI engines (ChatGPT, Gemini, Perplexity, Claude, and Google AI Overviews) and a tailored prompt set, it shows which answers your brand is missing from, which competitors take the citation in your place, and the technical and content reasons why.

Who is this for?

Anyone whose audience now turns to ChatGPT, Gemini, Perplexity or Claude before making a decision. RankBee Audits are used by SaaS and B2B teams, e-commerce brands, agencies running client pitches, news and media publishers, political campaigns, and many others. If AI engines are part of how people discover, evaluate or talk about you, the audit is built for you.

How is this different from a traditional SEO audit?

A traditional audit grades you on Google's signals — backlinks, keywords, Core Web Vitals. RankBee grades you on what large language models actually reason about: entities, attributes, answer-first structure, citation-worthiness, and crawlability through the bot stack AI assistants use today (GPTBot, ClaudeBot, PerplexityBot, Google-Extended and 20 more). Strong Google rankings don't automatically translate into AI citations, and that gap is what the audit measures.

How does the audit work?

Four sections, each grounded in real data. Crawlability runs five technical phases: robots.txt rules, virtual-user probes from your target geographies, live LLM web-search fetches, bot-impersonation against your CDN, and token-depth indexability. Rankings Matrix runs your buyer prompts against up to 5 AI engines and logs every citation, co-citation, and competitor mention. Content Scorecard simulates AI ranking at the page level — RankBee ingests competitor content, generates variations, and scores yours 1–10 on the attributes models actually reward. Sentiment Snapshot reads how engines describe you when they do mention you, clustered by audience intent.

Where do the prompts come from?

RankBee discovers them for you. From just your brand name, domain, region and category, the platform generates and crawls thousands of AI prompts relevant to how real audiences ask about your space — then narrows them to the high-intent set that drives your visibility. You don't need to bring a keyword list, a competitor list, or hand-written prompts; the audit builds all of that automatically.

What does "invisible to AI" actually mean?

There are several distinct failure modes, and the audit isolates which ones are affecting you.

  • Uncrawlable. Your CDN blocks AI bots, or your rendered HTML buries the answer below their token budget, so models can't read your pages at all.
  • Crawlable but uncited. Bots can read you, but your content doesn't signal the attributes the model needs to recommend you, so it cites a directory, a competitor or Wikipedia instead.
  • Cited but mis-framed. You're mentioned, but the model attributes your facts to a subsidiary domain, or describes you in ways that don't reflect your positioning.
  • Locked out of live retrieval. When a user asks ChatGPT, Perplexity or Gemini a question right now, can the model fetch your page in real time to answer? The crawlability audit tests this end-to-end — many sites pass robots.txt but fail at the CDN or render layer, so live retrieval silently fails.
  • Excluded from training data. Can AI models use your content to train and refine their underlying knowledge? Your robots.txt and bot policies decide whether crawlers like GPTBot, ClaudeBot, Google-Extended and CCBot are allowed to ingest you. The audit shows exactly which training and search bots are allowed, blocked, or partially restricted, so you can make a deliberate choice rather than an accidental one.
How long does it take, and what do I need to provide?

Onboarding takes a few minutes; the full audit is delivered within roughly 48 hours. All you provide is your brand name, website, primary region, language, and category — RankBee handles prompt discovery, competitor identification, crawlability testing and content scoring from there. Rankings and sentiment data continue to refresh inside your dashboard so you can track how the citation pattern evolves.

What happens after the report — does it fix the issues?

The audit diagnoses; remediation happens in the rest of the platform. Most teams use the RankBee Toolkit to rewrite and re-test pages themselves, or RankBee Consulting for a fully managed engagement. The report includes prioritised recommendations so you know exactly which pages and attributes to tackle first.

Can I share the report with my team and stakeholders?

Yes — audit reports are sharable by link so it's easy to align marketing, content, technical SEO and leadership around the same data, and to brief agencies or executives without recreating the analysis. Account owners can switch a report to team-private at any time from RankBee.

How do I get a full audit?
Full audits are available to RankBee subscribers. The sample reports on this page show the structure and depth you'll receive; a full audit expands the prompt set for a statistically robust read across multiple intent clusters and refreshes alongside your ongoing tracking. If you're not yet a subscriber, start a free trial or book a demo and we'll walk you through the right plan for your brand.
Next steps for BitFuFu

Close the training-time gap and own the Bitmain partnership narrative.

Two interventions move the needle most. First, fix the WAF/robots.txt mismatch so GPTBot, PerplexityBot and ClaudeBot can actually fetch the site — every month they cannot is a month BitFuFu is rebuilt from third-party sources in pretraining. Second, reclaim the Bitmain partnership narrative from Coinfomania (1.84/10) by adding a dated, citation-heavy /bitmain page covering the March 2021 strategic hash-power selection and the 2024 framework agreement. Bonus: publish a 'Tax & reporting playbook for US cloud-mining customers' — an entire 10-cell cluster sits empty.

Prepared by RankBee · rankbee.ai · RB-BF-20260514