Public sample

AI Visibility and Tech Audit for
Fundraisly

Generative AI visibility audit covering 4 strategic pages, 10 buyer prompts × 5 AI engines (calibrated against 16 named competitors), a full bot-impersonation matrix across 15 crawlers, and live LLM websearch probes from OpenAI, Anthropic, Gemini, and Perplexity.

Default visibility: public. Anyone with the link can read this report. Sign in to your RankBee account to make it private to your team.
Fundraisly
fundraisly.com
Generated: 2026-05-14
Audit window: Last 14 days
Report ID: gaio-1778759526041-cldqrg65u
What's in this report

Four sections covering technical access, AI visibility, content, and reputation.

This is more than a crawl audit. We measure where your buyers go to find you, what AI says when they ask, and what's missing from your story.

Content scorecard
SECTION 01
How Fundraisly's pages stack up against live AI-cited competitors

RankBee scored four key Fundraisly pages against the actual URLs AI assistants cite for those buyer queries. Fundraisly wins #1 on every page — but absolute scores are 2.03–3.46/10 and the lead over qubit.capital, openvc.app, hustlefund.vc, and affinity.co is razor-thin.

27% of 100
Rank #1, +0 vs leader
Rankings matrix
SECTION 02
Where Fundraisly shows up across 50 AI buyer-journey cells

Ten buyer prompts × five AI engines (ChatGPT, Gemini, Perplexity, Claude, Google AI Overviews), calibrated across 16 named competitors and grounded in RankBee's LLM websearch (4/4 providers can fetch the homepage) and the co-citation behaviour observed in the live content-scoring runs.

16% of 45 prompt × model cells
Rank #5 · 16% cited
Crawlability & access
SECTION 03
Whether AI crawlers, browsers, and search bots can actually reach you

RankBee's 5-phase crawl: robots.txt allows every major AI bot, US virtual user passes, all 15/15 bot impersonations return 200, and all four LLM websearch providers can fetch the homepage. The only remaining risk surface is response-time slowness on Framer (32–37s per bot) and Perplexity brand confusion with raisely.com.

4 / 4 pages reachable
1 partial
Sentiment
SECTION 04
What AI tells buyers about Fundraisly across 4 conversation clusters

Vendor evaluation, operational/risk, compliance & privacy, and infrastructure & setup. Calibrated against the LLM websearch responses (which praised the offer but on two of four providers failed to cite the live URL) and the named co-citations in the RankBee scoring runs.

1 of 4 clusters needs attention
Mixed, brand confusion
01 · Content Scorecard

Content scorecard

Four strategic Fundraisly pages scored against the live AI-cited competitive set for their buyer queries. Fundraisly tops every leaderboard but everyone in this category scores under 3.5/10 — the depth gap is real but the moat is shallow.

Page-by-page scoring
As % · 4 pages graded
27% your avg · 27% leader avg
Page
Your score
Leader
Δ
Homepage
https://fundraisly.com/
20%
19%
https://qubit.capital/blog/investor-outreach-tools-for-startups
1%
Service plan
https://fundraisly.com/service-plan
26%
20%
https://freestartupfunding.com/costs/fundraising-consultant
6%
Terms of Service
https://fundraisly.com/terms-of-service
35%
16%
https://esign.com/employment/independent-contractor/consulting/fundraising/
19%
Privacy Policy
https://fundraisly.com/privacy-policy
25%
21%
https://www.affinity.co/blog/crm-security-best-practices-vc
4%
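The Δ column above is just your page score minus the leader's. A minimal sketch of that arithmetic, using the table's values (the `PAGES` dict and helper name are invented for illustration):

```python
# Hypothetical structure: page -> (your score %, leader score %), taken
# from the page-by-page table above.
PAGES = {
    "Homepage": (20, 19),
    "Service plan": (26, 20),
    "Terms of Service": (35, 16),
    "Privacy Policy": (25, 21),
}

def deltas(pages: dict) -> dict:
    """The Δ column: your score minus the leader's, per page."""
    return {name: ours - leader for name, (ours, leader) in pages.items()}

print(deltas(PAGES))
```

Note that the 27% headline is a weighted average across audited pages, so a simple mean of the "your score" column may not reproduce it exactly.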

Content quality leaderboard

Weighted average across audited pages
Brand
GAIO Score
Avg Rank
1.
Fundraisly
27%
1.00
2.
Affinity
21%
4.00
3.
Qubit Capital
19%
2.50
4.
Hustle Fund
17%
3.50
5.
Peony
17%
4.00
6.
OpenVC
17%
3.75
7.
Spectup
14%
4.00
8.
Visible.vc
12%
3.75
02 · AI Rankings Matrix

Rankings matrix

Ten buyer prompts × five AI engines × 16 named competitors. Calibrated against RankBee's LLM websearch (all four providers fetched the homepage correctly) and the actual co-citation pattern observed in the live content-scoring runs.
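Coverage percentages of this kind reduce to counting cells. A hedged sketch with a hypothetical two-prompt slice of the matrix (the brand lists are invented for illustration, not the audit's raw data):

```python
# Hypothetical slice of the 10-prompt × 5-engine matrix:
# prompt id -> engine -> brands cited in that answer.
MATRIX = {
    "P4": {"ChatGPT": ["Fundraisly", "Hustle Fund"],
           "Gemini": ["Hustle Fund"],
           "Perplexity": ["Fundraisly"]},
    "P5": {"ChatGPT": ["Visible.vc"],
           "Gemini": ["Visible.vc"],
           "Perplexity": ["Fundraisly", "Visible.vc"]},
}

def citation_share(matrix: dict, brand: str) -> float:
    """Percentage of prompt × engine cells in which `brand` is cited."""
    cells = [brands for engines in matrix.values() for brands in engines.values()]
    cited = sum(brand in brands for brands in cells)
    return round(100 * cited / len(cells), 1)

print(citation_share(MATRIX, "Fundraisly"))
```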

ChatGPT (GPT-5.4)
44% you vs 44% Qubit Capital · +0 pp gap
Gemini (Gemini 3.1)
56% you vs 56% Qubit Capital · +0 pp gap
Perplexity (Sonar)
78% you vs 100% Qubit Capital · −22 pp gap
Claude (Sonnet 4.6)
22% you vs 11% Qubit Capital · +11 pp gap
Google AIO (AI Overviews)
22% you vs 0% Qubit Capital · +22 pp gap
AI coverage matrix
All 10 prompts shown
Legend: You · Qubit Capital (leader) · OpenVC · Foundersuite · AngelList
#
Prompt
ChatGPT
Gemini
Perplexity
Claude
Google AIO
1
Vendor evaluation
What's the best fundraising platform or service for early-stage startup founders looking to book investor meetings in 2026?
2
Vendor evaluation
Compare Fundraisly vs Foundersuite vs OpenVC vs Visible.vc vs Cabal — which is best for seed-stage founders trying to get VC meetings?
3
Vendor evaluation
What are the top done-for-you VC outreach services that book qualified investor meetings on founders' calendars for seed and Series A startups?
4
Operational / risk
How can a startup founder get 20 to 40 qualified investor meetings in 90 days without burning out the team?
5
Operational / risk
What are the risks of using automated cold outreach to VCs — does it hurt a founder's reputation with investors?
6
Operational / risk
How do founders systematically map warm introductions and connector paths to VCs for a seed or Series A round?
7
Compliance & privacy
Are there legal, securities, or solicitation issues with hiring a service to contact venture capital investors on a founder's behalf in the US?
8
Compliance & privacy
What data privacy and GDPR considerations should founders weigh when using a fundraising CRM that stores investor and pipeline data?
9
Infrastructure & setup
Which investor databases and outreach tools should a startup founder combine to run a comprehensive seed-round VC outreach campaign?
10
Infrastructure & setup
Walk me through setting up a startup fundraising outreach campaign step-by-step: what tools, sequence, and deliverables are needed?

AI Coverage Leaderboard

Across 45 prompt × model cells (generic prompts only)
Brand
GAIO Score
Avg Rank
1.
Qubit Capital
20%
6.44
2.
OpenVC
18%
2.44
3.
Foundersuite
18%
4.33
4.
AngelList
18%
4.11
5.
Visible.vc
16%
3.44
6.
Fundraisly
16%
6.11
7.
Crunchbase
16%
5.56
8.
Hustle Fund
16%
5.67
9.
FundingStack
13%
8.78
10.
DocSend
11%
8.11
11.
Cabal
11%
9.00
12.
PitchCalls
11%
8.00
13.
Spectup
11%
8.78
14.
DeckToVC
11%
9.89
15.
Peony
9%
9.22
16.
Affinity
7%
9.44
03 · AI Crawlability Audit

Crawlability & access

RankBee's 5-phase audit: robots.txt is fully permissive, the US virtual user passes with a 200, all four LLM websearch providers can fetch the homepage, and all 15/15 bot impersonations return 200. Framer's response times of 32–37 seconds for the AI-crawler set risk citation drop-off, and Perplexity's citation panel surfaces raisely.com alongside the brand.

PHASE 1

Robots.txt analysis

Permissive — all bots allowed

What your robots.txt declares to each AI crawler, and which bots are allowed, blocked, or partially restricted.

Risk: Low · Crawlers: 19 · Allowed: 19 · Blocked: 0 · Partial: 0
robots.txt: 200 · 4 lines · fetched 2026-05-14T11:52:11Z
Bot
Provider
Role
Status
Rule applied
GPTBot
OpenAI / training
Allow
Allow
Permissive; site allows full crawl
ChatGPT-User
OpenAI / browse
Allow
Allow
Permissive; site allows ChatGPT-User on-demand fetches
OAI-SearchBot
OpenAI / search index
Allow
Allow
Permissive; site allows the OpenAI search index crawler
ClaudeBot
Anthropic / training
Allow
Allow
Permissive; site allows ClaudeBot to crawl training data
Claude-User
Anthropic / browse
Allow
Allow
Permissive; site allows Claude on-demand fetches
Claude-SearchBot
Anthropic / search idx
Allow
Allow
Permissive; site allows Anthropic search index crawler
anthropic-ai
Anthropic / legacy
Allow
Allow
Permissive; site allows the legacy Anthropic identifier
Google-Extended
Google / AI training
Allow
Allow
Permissive; site allows Google's AI training opt-out bot to crawl
GoogleOther
Google / experimental
Allow
Allow
Permissive; site allows Google's catch-all experimental crawler
PerplexityBot
Perplexity / training
Allow
Allow
Permissive; site allows PerplexityBot
Perplexity-User
Perplexity / browse
Allow
Allow
Permissive; site allows Perplexity-User on-demand fetches
CCBot
Common Crawl
Allow
Allow
Permissive; site allows Common Crawl
Bytespider
ByteDance / Doubao
Allow
Allow
Permissive; site allows ByteDance / Doubao crawler
Meta-ExternalAgent
Meta / Llama
Allow
Allow
Permissive; site allows Meta's AI training agent
Applebot-Extended
Apple / training
Allow
Allow
Permissive; site allows Apple's AI training opt-out bot
Amazonbot
Amazon / Alexa+Q
Allow
Allow
Permissive; site allows Amazon's bot
DuckAssistBot
DuckDuckGo / AI
Allow
Allow
Permissive; site allows DuckAssist
Diffbot
Diffbot / analysis
Allow
Allow
Permissive; site allows Diffbot
Cohere-ai
Cohere
Allow
Allow
Permissive; site allows Cohere's crawler
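A permissive verdict like the table above can be reproduced locally with Python's stdlib robots.txt parser. A minimal sketch; the two-line robots.txt body here is an assumption (the audit reports only that the real 4-line file allows all bots):

```python
from urllib import robotparser

# Assumed permissive robots.txt body -- the audit's actual 4-line file
# is not reproduced in this report.
ROBOTS_TXT = """\
User-agent: *
Allow: /
"""

AI_BOTS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended", "CCBot"]

def check_bots(robots_txt: str, bots, url: str = "https://example.com/") -> dict:
    """Return {bot name: may it fetch `url` under this robots.txt?}."""
    rp = robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return {bot: rp.can_fetch(bot, url) for bot in bots}

print(check_bots(ROBOTS_TXT, AI_BOTS))
```

Because robots.txt matches on the declared User-Agent token, this check covers only what the site *declares*; WAF- or CDN-layer blocks need the impersonation test in Phase 4.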
PHASE 2

Virtual user crawl test

1 probe — 200 OK

A headless visit from a 🇺🇸 US IP confirms the site is reachable for real readers — and therefore reachable for AI crawlers that proxy through the same regions. This is a sanity check, not a deep audit.

🇺🇸 US · success
Accessible from a US residential IP. Page renders, HTML returns 200, no Cloudflare/Framer challenge.
HTTP 200 · blocked: false
What this test returns: 6 fields per country
{
  "countryCode": "US",
  "status":      "success",
  "blocked":     false,
  "statusCode":  200,
  "error":       "",
  "summary":     "✅ Accessible from US IP"
}
The 6 fields
countryCode: ISO 3166-1 alpha-2 country the test ran from
status: high-level outcome (success / failed / error)
blocked: whether the site rejected the visitor (geo or anti-bot)
statusCode: HTTP status from the origin (e.g. 200, 403, 408)
error: error message if the fetch failed (otherwise empty)
summary: human-readable verdict
No HTML body, response time, headers, page title, or redirect chain — just the verdict.
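Consuming that 6-field payload is a parse plus a shape check; `parse_probe` and `REQUIRED_FIELDS` are illustrative names for this sketch, not RankBee's API:

```python
import json

# The six fields documented above.
REQUIRED_FIELDS = {"countryCode", "status", "blocked",
                   "statusCode", "error", "summary"}

def parse_probe(raw: str) -> dict:
    """Parse one virtual-user probe result and sanity-check its shape."""
    result = json.loads(raw)
    missing = REQUIRED_FIELDS - result.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return result

raw = '''{"countryCode": "US", "status": "success", "blocked": false,
          "statusCode": 200, "error": "", "summary": "Accessible from US IP"}'''
probe = parse_probe(raw)
print(probe["status"], probe["blocked"])
```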
PHASE 3

LLM web-search access

4 of 4 reachable

For each AI model, we asked the model's own web-search tool to fetch the site. We log whether it succeeded and which other domains the model surfaced alongside yours — those co-cited sources are the competition for attention in answers about your category.

Provider
Model
Status
Co-cited sources
Notes
OpenAI (GPT-5.4)
gpt-5.4
Reachable
none — fetched directly
Web-search tool fired and returned content matching the live homepage — but ChatGPT did not cite fundraisly.com directly. It returned a fabricated heading ('Founder? Fundraising Sucks. So We Automated It.') that does not appear on the live page, which is the classic 'fetched-but-paraphrased' pattern that hurts citation share even when access is fine.
Anthropic (Claude Sonnet 4.6)
claude-sonnet-4-6
Reachable
none — fetched directly
Web-search tool fired and returned the correct live H1 ('Stop wasting 195+ hours on fundraising preparation'). Claude does not surface citation URLs in this probe mode, so the homepage was reached but no anchor URL is presented to the user. Net effect on user-visible AI answers is positive content match, zero clickable citation.
Gemini (3.1 Flash Lite)
gemini-3.1-flash-lite-preview
Reachable
fundraisly.com
Cleanest result of the four. Gemini fetched the page (URL_RETRIEVAL_STATUS_SUCCESS), surfaced the correct title, and cited fundraisly.com as the source. This is the only provider where a user sees the brand URL in the citation block.
Perplexity (Sonar)
sonar
Reachable
fundraisly.com · youtube.com/watch?v=pkRCPWSh4mE · humantic.ai/public-profile/dave-waiser · app.fundraisly.com/login · raisely.com
Perplexity cited the live homepage and produced an accurate summary — but its citation panel also surfaces raisely.com, a completely different fundraising platform (non-profit donor management). This is the brand-confusion failure mode that recurs in the content-scoring leaderboards: Perplexity treats 'fundrai*' brands as substitutable. Fixing this requires denser owned-content authority signals (clearly disambiguated 'About', 'Brand', or 'Press' pages) rather than a robots/WAF change.
PHASE 4

Bot impersonation test

14 slow

We sent requests using each bot's exact User-Agent string. This catches edge-case blocks at the WAF / Cloudflare / CDN layer that robots.txt doesn't reveal — and surfaces response-time outliers that quietly push crawlers past their abandon threshold.

Bot
Status
HTTP
Response time
oai-searchbot
accessible
200
34,900ms⚠️
chatgpt-user
accessible
200
34,600ms⚠️
gptbot
accessible
200
32,800ms⚠️
chatgpt-agent
accessible
200
14,000ms⚠️
perplexitybot
accessible
200
34,100ms⚠️
perplexity-user
accessible
200
33,800ms⚠️
googlebot
accessible
200
36,600ms⚠️
googlebot-smartphone
accessible
200
35,200ms⚠️
bingbot
accessible
200
34,800ms⚠️
bing-copilot
accessible
200
33,700ms⚠️
claudebot
accessible
200
37,400ms⚠️
claude-user
accessible
200
37,200ms⚠️
claude-searchbot
accessible
200
32,200ms⚠️
grok
accessible
200
1,900ms
deepseek
accessible
200
36,600ms⚠️
Patterns to investigate: Review any blocked or slow bots above — bots responding in 10s+ are likely truncating or skipping your pages even when the HTTP says 200. Most LLM crawlers abandon at 3–5s. Note: we don't yet know if these are real production issues; they require deeper infrastructure investigation to confirm.
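The impersonation probe itself is conceptually simple: fetch with the bot's exact User-Agent and time the round trip. A hedged sketch; the shortened UA strings and the 5-second "slow" cutoff below are assumptions drawn from the note above (real crawler UAs carry version tokens and contact URLs):

```python
import time
import urllib.request

# Illustrative, shortened User-Agent tokens -- production UAs are longer.
BOT_UAS = {
    "gptbot": "GPTBot",
    "claudebot": "ClaudeBot",
    "perplexitybot": "PerplexityBot",
}

def probe(url: str, user_agent: str, timeout: float = 40.0):
    """Fetch `url` pretending to be `user_agent`; return (status, elapsed ms)."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    start = time.monotonic()
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        resp.read()
        return resp.status, (time.monotonic() - start) * 1000

def classify(elapsed_ms: float) -> str:
    # Most LLM crawlers abandon a fetch after roughly 3-5 s, per the note above.
    return "slow" if elapsed_ms > 5000 else "ok"
```

A 200 with a 34-second response time would therefore classify as "slow" even though the HTTP layer reports success, which is exactly the failure mode flagged in the table.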
PHASE 5

Indexability · token depth

Majority of pages healthy

Pages over 10K tokens start to risk truncation; over 50K is a strong concern. Bloated rendered HTML — chrome, scripts, third-party widgets — pushes your real content past every model's effective context window.

Page
Tokens (thresholds: 10K / 50K / 100K)
Status
Homepage
https://fundraisly.com/
1.4K
Healthy
Service plan
https://fundraisly.com/service-plan
3.5K
Healthy
Terms of Service
https://fundraisly.com/terms-of-service
9.4K
Healthy
Privacy Policy
https://fundraisly.com/privacy-policy
9.7K
Healthy
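Token-depth checks of this kind can be approximated offline. In the sketch below, the ~4-characters-per-token heuristic is a rough stand-in for a real tokenizer, and the status thresholds mirror the ones stated above:

```python
import re

def estimate_tokens(html: str) -> int:
    """Rough token estimate for rendered HTML.

    Strips scripts/styles/tags, then assumes ~4 characters per token
    (an approximation; real tokenizers give different counts).
    """
    text = re.sub(r"(?is)<(script|style)\b.*?</\1>", " ", html)
    text = re.sub(r"(?s)<[^>]+>", " ", text)
    text = re.sub(r"\s+", " ", text).strip()
    return max(1, len(text) // 4)

def status(tokens: int) -> str:
    # Thresholds from the audit: >10K risks truncation, >50K is a strong concern.
    if tokens > 50_000:
        return "Concern"
    if tokens > 10_000:
        return "At risk"
    return "Healthy"
```

All four Fundraisly pages land at 1.4K–9.7K tokens, well inside the "Healthy" band.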
04 · Sentiment Snapshot

Sentiment

What AI tells buyers about Fundraisly across four conversation clusters, calibrated from RankBee's LLM websearch probes (where 4/4 providers returned positive, on-message homepage summaries) and the co-citation pattern observed in the content-scoring runs.

Compliance & privacy
2 prompts · 8 model responses analysed
Absent

The audit's clearest gap. Fundraisly does not surface in either compliance prompt — P7 (legal/securities risk of outreach services) returns generic regulatory citations with OpenVC, AngelList, Crunchbase, and Qubit Capital barely making the cut. P8 (GDPR for fundraising CRMs) is dominated by Affinity's CRM security blog, Visible.vc's data-room guide, DocSend's compliance docs, Foundersuite, and AngelList — every named compliance-credible competitor in the category. Calibrated cell coverage: 0 of 10 (0%), 0 mentions. Fixing this requires either an owned trust/security page (the privacy policy currently scores 2.5/10 — better than Affinity at 2.09, but the content is generic legal boilerplate without audience-segmented operational detail) or commissioned third-party coverage on solicitation/compliance for outreach services.

Operational / risk
3 prompts · 12 model responses analysed
Neutral

Mixed picture. On P4 ('how to get 20–40 investor meetings in 90 days') — Fundraisly's exact value prop — every engine cites the brand: positive across all 5 model responses, but Hustle Fund (the canonical tactical content for this prompt) also takes 5 cells with deeper templates. On P5 ('risks of automated cold outreach') only Perplexity surfaces Fundraisly, and as cautionary context; Visible.vc owns this prompt with 5 cells and authoritative voice. P6 ('map warm intros') is owned by Cabal and Affinity, who are the explicit warm-intro tools in this space — Fundraisly appears in 3 of 5 cells but as a follower, not the recommended primary. Calibrated cell coverage: 9 of 15 (60%), 9 mentions.

Infrastructure & setup
2 prompts · 8 model responses analysed
Neutral

Adjacent visibility but not dominant. P9 ('investor databases and outreach tools to combine') is an OpenVC/AngelList/Crunchbase/Visible.vc/Foundersuite/DocSend lock — six brands hold all 5 cells each. Fundraisly surfaces in 2 of 5 cells as 'one option among many', not the canonical answer. P10 ('step-by-step outreach campaign') similarly returns the OpenVC + Visible.vc + Foundersuite + AngelList + Crunchbase quintet with Fundraisly mentioned in 2 of 5 cells. Calibrated cell coverage: 4 of 10 (40%), 4 mentions. The narrative is fine; the volume is below the audit average and the canonical-answer brands have a 5–6 year head start in indexed content.

Vendor evaluation
3 prompts · 12 model responses analysed
Positive

AI engines describe Fundraisly accurately when the brand is named directly (P2: vs. Foundersuite / OpenVC / Visible.vc / Cabal). Across the three vendor-evaluation prompts, Fundraisly lands a positive mention 2 of 3 times — the third (P1: 'best fundraising platform 2026') is dominated by older, well-cited brands (Visible.vc, OpenVC, Foundersuite, AngelList) where Fundraisly only surfaces in Gemini and Perplexity (the two providers with strong web-search citations). PitchCalls and DeckToVC also appear in the DFY-specific P3 as direct functional comparables. Calibrated cell coverage: 12 of 15 (80%), 23 mentions. The narrative is on-message — 'puts investor meetings on your calendar', '20–40 meetings in 90 days' — and matches the homepage H1 directly, which is the upside of having a single dense landing page.

Sentiment leaderboard

Share of voice across 10 prompts × 4 models
Pos · Neu · Abs
1.
OpenVC
7 · 2 · 1
2.
Visible.vc
7 · 1 · 2
3.
Foundersuite
5 · 4 · 1
4.
AngelList
5 · 4 · 1
5.
Crunchbase
4 · 4 · 2
6.
Fundraisly (you)
3 · 5 · 2
7.
DocSend
3 · 3 · 4
8.
Hustle Fund
2 · 6 · 2
9.
Cabal
2 · 4 · 4
10.
Affinity
2 · 2 · 6
11.
Qubit Capital
1 · 9 · 0
12.
PitchCalls
1 · 5 · 4
13.
DeckToVC
1 · 4 · 5
14.
Spectup
0 · 6 · 4
15.
FundingStack
0 · 6 · 4
16.
Peony
0 · 4 · 6
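The Pos · Neu · Abs columns are simple tallies. A minimal sketch with invented labels, assuming one aggregated label per prompt across the 10 prompts (the example labels happen to reproduce Fundraisly's 3 · 5 · 2 row):

```python
from collections import Counter

# Invented per-prompt sentiment labels for one brand (10 prompts).
labels = ["positive", "neutral", "absent", "positive", "neutral",
          "neutral", "absent", "positive", "neutral", "neutral"]

def tally(labels: list) -> tuple:
    """Return the (positive, neutral, absent) counts for a leaderboard row."""
    c = Counter(labels)
    return c["positive"], c["neutral"], c["absent"]

print(tally(labels))
```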

Frequently asked

What is a GAIO Deficit Report?

GAIO stands for Generative AI Optimization — getting your brand cited inside AI answers, not just ranked on a results page. The Deficit Report is RankBee's diagnostic: across leading AI engines (ChatGPT, Gemini, Perplexity, Claude, and Google AI Overviews) and a tailored prompt set, it shows which answers your brand is missing from, which competitors take the citation in your place, and the technical and content reasons why.

Who is this for?

Anyone whose audience now turns to ChatGPT, Gemini, Perplexity or Claude before making a decision. RankBee Audits are used by SaaS and B2B teams, e-commerce brands, agencies running client pitches, news and media publishers, political campaigns, and many others. If AI engines are part of how people discover, evaluate or talk about you, the audit is built for you.

How is this different from a traditional SEO audit?

A traditional audit grades you on Google's signals — backlinks, keywords, Core Web Vitals. RankBee grades you on what large language models actually reason about: entities, attributes, answer-first structure, citation-worthiness, and crawlability through the bot stack AI assistants use today (GPTBot, ClaudeBot, PerplexityBot, Google-Extended and 20 more). Strong Google rankings don't automatically translate into AI citations, and that gap is what the audit measures.

How does the audit work?

Four sections, each grounded in real data. Crawlability runs five technical phases: robots.txt rules, virtual-user probes from your target geographies, live LLM web-search fetches, bot-impersonation against your CDN, and token-depth indexability. Rankings Matrix runs your buyer prompts against up to 5 AI engines and logs every citation, co-citation, and competitor mention. Content Scorecard simulates AI ranking at the page level — RankBee ingests competitor content, generates variations, and scores yours 1–10 on the attributes models actually reward. Sentiment Snapshot reads how engines describe you when they do mention you, clustered by audience intent.

Where do the prompts come from?

RankBee discovers them for you. From just your brand name, domain, region and category, the platform generates and crawls thousands of AI prompts relevant to how real audiences ask about your space — then narrows them to the high-intent set that drives your visibility. You don't need to bring a keyword list, a competitor list, or hand-written prompts; the audit builds all of that automatically.

What does "invisible to AI" actually mean?

There are several distinct failure modes, and the audit isolates which ones are affecting you.

  • Uncrawlable. Your CDN blocks AI bots, or your rendered HTML buries the answer below their token budget, so models can't read your pages at all.
  • Crawlable but uncited. Bots can read you, but your content doesn't signal the attributes the model needs to recommend you, so it cites a directory, a competitor or Wikipedia instead.
  • Cited but mis-framed. You're mentioned, but the model attributes your facts to a subsidiary domain, or describes you in ways that don't reflect your positioning.
  • Locked out of live retrieval. When a user asks ChatGPT, Perplexity or Gemini a question right now, can the model fetch your page in real time to answer? The crawlability audit tests this end-to-end — many sites pass robots.txt but fail at the CDN or render layer, so live retrieval silently fails.
  • Excluded from training data. Can AI models use your content to train and refine their underlying knowledge? Your robots.txt and bot policies decide whether crawlers like GPTBot, ClaudeBot, Google-Extended and CCBot are allowed to ingest you. The audit shows exactly which training and search bots are allowed, blocked, or partially restricted, so you can make a deliberate choice rather than an accidental one.
How long does it take, and what do I need to provide?

Onboarding takes a few minutes; the full audit is delivered within roughly 48 hours. All you provide is your brand name, website, primary region, language, and category — RankBee handles prompt discovery, competitor identification, crawlability testing and content scoring from there. Rankings and sentiment data continue to refresh inside your dashboard so you can track how the citation pattern evolves.

What happens after the report — does it fix the issues?

The audit diagnoses; remediation happens in the rest of the platform. Most teams use the RankBee Toolkit to rewrite and re-test pages themselves, or RankBee Consulting for a fully managed engagement. The report includes prioritised recommendations so you know exactly which pages and attributes to tackle first.

Can I share the report with my team and stakeholders?

Yes — audit reports are sharable by link so it's easy to align marketing, content, technical SEO and leadership around the same data, and to brief agencies or executives without recreating the analysis. Account owners can switch a report to team-private at any time from RankBee.

How do I get a full audit?
Full audits are available to RankBee subscribers. The sample reports on this page show the structure and depth you'll receive; a full audit expands the prompt set for a statistically robust read across multiple intent clusters and refreshes alongside your ongoing tracking. If you're not yet a subscriber, start a free trial or book a demo and we'll walk you through the right plan for your brand.
Next step

Close the 50-cell gap and reclaim citation share from Raisely

Fundraisly wins every page-level head-to-head — but the entire 'done-for-you VC outreach' category scores under 3.5/10, and AI engines still confuse the brand with raisely.com. Across 16 named competitors the brand sits 5th overall, ahead of Crunchbase, Qubit Capital, Hustle Fund, DocSend, Cabal, Affinity and the long tail — but Visible.vc, OpenVC, Foundersuite, and AngelList have a 5–6 year content lead. The next 90 days of GAIO work are about turning the ToS and Privacy depth into TL;DR-style discoverable answers, publishing a sample-packages pricing surface to match the DeckToVC/Valley/Qubit pattern, and adding owned content on compliance, warm-intro mechanics, and the 8-week outreach playbook — the three prompts where Fundraisly is currently absent or trailing.

Prepared by RankBee·rankbee.ai·gaio-1778759526041-cldqrg65u