

Once you have ratings flowing, two AI surfaces summarise them:
  1. Standard insights (/ai-insights) — narrative + structured action items per evaluation or comparison.
  2. Agreement insights (/agreement-insights) — narrative explaining why raters agreed or disagreed, dimension by dimension.
Both are generated by Gemini, cached on the row, and stale-detected by rating count.

Standard insights

curl https://app.autousers.ai/api/v1/evaluations/$EVAL_ID/ai-insights \
  -H "Authorization: Bearer $AUTOUSERS_API_KEY"
{
  "id": "ains_clxq3...",
  "evaluationId": "eval_clxq3...",
  "comparisonId": null,
  "insightType": null,
  "recommendation": "Strengthen trust signals on checkout v2 — every persona flagged it.",
  "insights": "Six raters across three personas converged on three themes: ...",
  "structured": {
    "actionItems": [
      {
        "priority": "P1",
        "title": "Add a security badge near the CTA",
        "rationale": "..."
      },
      {
        "priority": "P2",
        "title": "Surface review count on the order summary",
        "rationale": "..."
      }
    ]
  },
  "ratingCountAtGeneration": 24,
  "generatedAt": "2026-05-04T11:05:00.000Z"
}
If the row is stale (i.e. the evaluation has new ratings since generation), the response includes stale: true. Regenerate:
curl -X POST https://app.autousers.ai/api/v1/evaluations/$EVAL_ID/ai-insights \
  -H "Authorization: Bearer $AUTOUSERS_API_KEY"
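The fetch-then-regenerate flow above can be sketched client-side. `isStale` is a hypothetical helper that mirrors the server's rating-count comparison (the API already returns `stale: true`, so you normally don't need it); the URLs match the curl calls above:

```javascript
// Hypothetical client-side stale check: a row is stale when the evaluation
// has accumulated more ratings than it had at generation time.
function isStale(insightRow, currentRatingCount) {
  return currentRatingCount > insightRow.ratingCountAtGeneration;
}

// Sketch: read the cached row, and only pay for a new Gemini completion
// when the server flags it as stale.
async function freshInsights(evalId, apiKey) {
  const base = `https://app.autousers.ai/api/v1/evaluations/${evalId}/ai-insights`;
  const headers = { Authorization: `Bearer ${apiKey}` };

  const cached = await fetch(base, { headers }).then((r) => r.json());
  if (!cached.stale) return cached;

  // POST invalidates the cache and regenerates (one Gemini completion).
  return fetch(base, { method: "POST", headers }).then((r) => r.json());
}
```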
Per-comparison insights:
curl "https://app.autousers.ai/api/v1/evaluations/$EVAL_ID/ai-insights?comparisonId=$COMP_ID" \
  -H "Authorization: Bearer $AUTOUSERS_API_KEY"

Agreement insights

A separate row keyed on insightType: "agreement". Where standard insights ask “what did raters say?”, agreement insights ask “where did raters disagree, and why?”.
curl https://app.autousers.ai/api/v1/evaluations/$EVAL_ID/agreement-insights \
  -H "Authorization: Bearer $AUTOUSERS_API_KEY"
The narrative is calibrated to the Krippendorff α score:
α range          Tone
α ≥ 0.8          "Strong agreement on X, Y, Z. Trust the result."
0.6 ≤ α < 0.8    "Moderate agreement; spotlight on dimension D."
α < 0.6          "Raters disagreed substantially on D and E. Consider re-running with a clearer rubric."
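The same calibration can be mirrored client-side with a small bucketing function. The thresholds follow the table above, but the server's internal logic isn't exposed, so treat this as illustrative:

```javascript
// Illustrative mapping from Krippendorff's alpha to the tone band used
// in the agreement-insight narrative. Thresholds follow the table above.
function agreementBand(alpha) {
  if (alpha >= 0.8) return "strong";   // trust the result
  if (alpha >= 0.6) return "moderate"; // spotlight the contested dimension
  return "low";                        // consider re-running with a clearer rubric
}
```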

Costs

Each insight regeneration is a single Gemini completion (~$0.001–$0.005 depending on rating count), so the cost is negligible. Regenerations don't count against your autouser quota; they count against the per-user platform Gemini quota only when you're on the platform key (BYOK callers with User.geminiApiKey set don't count).

Caching

We don’t auto-regenerate. The cache is invalidated only when you POST to the /ai-insights route. This keeps the cost predictable and prevents accidental cost cascades when a flurry of ratings comes in. A typical pattern in CI:
// 1. Wait for autouser_run.completed webhooks until all expected runs finish.
// 2. Fetch insights once.
// 3. Pin to the report.
const insights = await fetch(
  `https://app.autousers.ai/api/v1/evaluations/${evalId}/ai-insights`,
  { method: "POST", headers: { Authorization: `Bearer ${KEY}` } }
).then((r) => r.json());

writeReport(insights.recommendation, insights.structured.actionItems);
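For step 3, a small hypothetical helper can order the structured action items so P1 fixes lead the report; the priority labels (P1, P2, …) come from the `structured.actionItems` payload shown earlier:

```javascript
// Hypothetical report helper: sort action items by priority label so the
// highest-priority items (P1 before P2, etc.) appear first. Returns a copy
// rather than mutating the API response.
function sortActionItems(actionItems) {
  return [...actionItems].sort((a, b) => a.priority.localeCompare(b.priority));
}
```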

Limitations

  • The model has access only to the rating data, not the actual designs. If you want it to reason about visual fidelity, post screenshots as files and reference them from customDimensions[].context.
  • Insights do not include dimensional scores you didn’t actually collect. If you want it to reason about Accessibility, include an Accessibility dimension on the template.
  • Output is in English regardless of Autousers-Locale (currently no i18n on this surface — on the roadmap).