
Getting Started

Every session starts the same way — Claude needs your project ID first.
You: What projects do I have in Writesonic?
Claude calls get_all_projects to fetch your project list, then uses the returned project_id automatically in subsequent queries.
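The flow above can be sketched in a few lines. The tool name get_all_projects comes from this doc; the response shape, the pick_project helper, and the mocked payload are assumptions for illustration only.

```python
# Hypothetical sketch of the session bootstrap. The real response shape of
# get_all_projects may differ; a list of {"project_id", "name"} dicts is assumed.

def pick_project(projects):
    """Return the project_id that later tool calls will reuse.

    For illustration we take the first project; in practice Claude shows
    you the list and uses the one you select.
    """
    return projects[0]["project_id"]

# Mocked get_all_projects response so the sketch runs standalone:
mock_response = [{"project_id": "proj_123", "name": "Acme"}]
project_id = pick_project(mock_response)
print(project_id)  # proj_123
```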

Share of Voice Tracking

Track how your brand compares to competitors across AI platforms.
You: Show me my share of voice vs competitors on Claude for the last 30 days
Claude calls query_performance_report with:
  • dimensions: ["website"]
  • website_type: "SELF+COMPETITORS"
  • filtered by model = Claude
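The arguments above can be written out as a request payload. The parameter names dimensions and website_type come from this doc; the filters key, its shape, and the date_range field are assumptions about how the model and time-range filters are passed.

```python
# Hypothetical query_performance_report payload for the share-of-voice example.
share_of_voice_request = {
    "dimensions": ["website"],            # break results out per website
    "website_type": "SELF+COMPETITORS",   # include your brand and competitors
    "filters": {"model": "Claude"},       # assumed filter syntax
    "date_range": "last_30_days",         # assumed field name for the time window
}

print(sorted(share_of_voice_request))
```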

Citation Gap Analysis

Find where competitors are getting cited but you’re not.
You: Where are my competitors getting cited in AI responses that I’m missing?
Claude calls get_all_citations filtered by competitor websites, then compares against your brand’s citations to identify gaps.
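The gap comparison Claude performs is essentially a set difference: citation sources competitors have, minus the ones your brand already has. The citation record shape below is an assumption for illustration.

```python
# Sketch of the citation-gap comparison. Each citation is assumed to carry a
# "source" field identifying where the citation appeared.

def citation_gaps(competitor_citations, self_citations):
    """Return sources where competitors are cited but your brand is not."""
    competitor_sources = {c["source"] for c in competitor_citations}
    self_sources = {c["source"] for c in self_citations}
    return competitor_sources - self_sources

gaps = citation_gaps(
    [{"source": "techcrunch.com"}, {"source": "g2.com"}],
    [{"source": "g2.com"}],
)
print(sorted(gaps))  # ['techcrunch.com']
```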

Prompt & Topic Analytics

Understand which search queries and topics drive the most brand mentions.
You: Which prompts drive the most brand mentions? Break it down by AI model and topic
Claude calls query_performance_report with:
  • dimensions: ["prompt", "model"]
  • website_type: "SELF"
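With two dimensions, each returned row carries a (prompt, model) pair that can be tallied. The dimensions ["prompt", "model"] match the call above; the row shape and the "mentions" field are assumptions for illustration.

```python
# Sketch of tallying a two-dimension report by (prompt, model).
from collections import Counter

rows = [
    {"prompt": "best ai writing tools", "model": "ChatGPT", "mentions": 12},
    {"prompt": "best ai writing tools", "model": "Claude", "mentions": 7},
    {"prompt": "seo content generator", "model": "ChatGPT", "mentions": 5},
]

mentions = Counter()
for row in rows:
    mentions[(row["prompt"], row["model"])] += row["mentions"]

top_pair, top_count = mentions.most_common(1)[0]
print(top_pair, top_count)  # ('best ai writing tools', 'ChatGPT') 12
```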

Sentiment Monitoring

Track how AI platforms describe your brand over time.
You: Track how AI platforms describe my brand — is sentiment trending positive or negative?
Claude calls query_sentiment_report with:
  • dimensions: ["model", "date"]
  • date_aggregation_interval: "month"
  • website_type: "SELF"
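Once the report returns one sentiment value per month, the trend question reduces to comparing consecutive months. The dimensions ["model", "date"] and monthly aggregation match the call above; a numeric per-month score is an assumption about the report's output.

```python
# Sketch of labeling the trend from a chronological list of monthly scores.

def trend(monthly_scores):
    """Label the direction of the most recent month-over-month change."""
    if len(monthly_scores) < 2:
        return "insufficient data"
    delta = monthly_scores[-1] - monthly_scores[-2]
    if delta > 0:
        return "positive"
    if delta < 0:
        return "negative"
    return "flat"

print(trend([0.61, 0.64, 0.72]))  # positive
```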

Monthly Performance Report

Generate a comprehensive monthly overview.
You: Give me a monthly performance summary for my brand — visibility, citations, and sentiment trends
Claude chains multiple tool calls:
  1. query_performance_report with dimensions: ["date"], date_aggregation_interval: "month"
  2. query_citation_report with dimensions: ["date"], date_aggregation_interval: "month"
  3. query_sentiment_report with dimensions: ["date"], date_aggregation_interval: "month"
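The three chained calls above all share dimensions: ["date"], so their results can be joined on the month. The three tool names come from the steps above; the row shapes and "value" field are assumptions for illustration.

```python
# Sketch of merging the three monthly reports into one summary keyed by month.

def merge_monthly(performance, citations, sentiment):
    """Join three lists of {"date": ..., "value": ...} rows into one dict per month."""
    summary = {}
    for label, rows in (("performance", performance),
                        ("citations", citations),
                        ("sentiment", sentiment)):
        for row in rows:
            summary.setdefault(row["date"], {})[label] = row["value"]
    return summary

merged = merge_monthly(
    [{"date": "2024-05", "value": 42}],
    [{"date": "2024-05", "value": 17}],
    [{"date": "2024-05", "value": 0.7}],
)
print(merged["2024-05"])  # {'performance': 42, 'citations': 17, 'sentiment': 0.7}
```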

Market-Specific Analysis

Compare performance across geographic regions.
You: How does my brand visibility differ between US, UK, and Germany?
Claude calls query_performance_report with:
  • dimensions: ["market"]
  • website_type: "SELF"
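With dimensions: ["market"], each returned row carries one market, so ranking them is a simple sort. The row shape ("market", "visibility") and the example numbers below are assumptions for illustration.

```python
# Sketch of ranking the per-market rows the call above would return.
rows = [
    {"market": "US", "visibility": 34.2},
    {"market": "UK", "visibility": 28.9},
    {"market": "DE", "visibility": 19.5},
]

ranked = sorted(rows, key=lambda r: r["visibility"], reverse=True)
print([r["market"] for r in ranked])  # ['US', 'UK', 'DE']
```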

Platform Deep Dive

Focus on a specific AI platform.
You: Give me a detailed breakdown of my brand’s presence on Perplexity — citations, sentiment, and top prompts
Claude chains:
  1. query_performance_report filtered by model = Perplexity
  2. query_citation_report with dimensions: ["domain"], filtered by Perplexity
  3. query_sentiment_report filtered by Perplexity
  4. get_all_prompts sorted by visibility, filtered by Perplexity
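The chain above reuses one platform filter across all four calls, which can be sketched as below. The four tool names and the Perplexity filter come from the steps above; the call_tool transport, the filters keyword, and the sort_by parameter are assumptions for illustration.

```python
# Sketch of a platform deep dive sharing one model filter across chained calls.

def deep_dive(call_tool, model="Perplexity"):
    platform_filter = {"model": model}  # assumed filter shape, reused by every call
    return {
        "performance": call_tool("query_performance_report", filters=platform_filter),
        "citations": call_tool("query_citation_report",
                               dimensions=["domain"], filters=platform_filter),
        "sentiment": call_tool("query_sentiment_report", filters=platform_filter),
        "prompts": call_tool("get_all_prompts",
                             sort_by="visibility", filters=platform_filter),
    }

# Mocked transport so the sketch runs standalone:
result = deep_dive(lambda tool, **kwargs: f"<{tool} result>")
print(result["prompts"])  # <get_all_prompts result>
```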

Tips for Effective Queries

  • Be specific about time ranges — Claude can filter by date dimensions with day/week/month granularity
  • Name the AI platform if you want platform-specific data (e.g., “on ChatGPT”, “across Perplexity”)
  • Ask for comparisons — Claude can cross-tabulate multiple dimensions (e.g., model + date, topic + market)
  • Request “my brand” vs “competitors” — Claude uses website_type to scope data appropriately