Once you have ratings flowing, two AI surfaces summarise them:
- Standard insights (/ai-insights) — narrative + structured action items per evaluation or comparison.
- Agreement insights (/agreement-insights) — narrative explaining why raters agreed or disagreed, dimension by dimension.
Standard insights
When new ratings arrive after the last generation, the cached insight is returned with stale: true. Regenerate it by POSTing to the /ai-insights route.
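A minimal sketch of that regeneration call. The base URL, bearer-token auth, and evaluation-scoped path prefix are assumptions for illustration; only the /ai-insights suffix comes from these docs.

```typescript
// Sketch only: BASE_URL, the Authorization header, and the evaluation-scoped
// path prefix are assumptions; /ai-insights is the documented route suffix.
const BASE_URL = "https://api.autousers.ai";

function regenerateInsightsRequest(evaluationId: string, apiKey: string): Request {
  // POSTing to /ai-insights invalidates the cached insight and regenerates it.
  return new Request(`${BASE_URL}/evaluations/${evaluationId}/ai-insights`, {
    method: "POST",
    headers: { Authorization: `Bearer ${apiKey}` },
  });
}
```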
Agreement insights
A separate row keyed on insightType: "agreement". Where standard insights ask “what did raters say?”, agreement insights ask “where did raters disagree, and why?”.
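If you read insight rows back, picking out the agreement row is a lookup on insightType. A sketch of that, where everything in the row shape except insightType is an assumed field name:

```typescript
// Only insightType is documented here; narrative is an assumed field name.
interface InsightRow {
  insightType: "standard" | "agreement";
  narrative?: string;
}

// Returns the agreement-insight row, if one has been generated.
function agreementInsight(rows: InsightRow[]): InsightRow | undefined {
  return rows.find((r) => r.insightType === "agreement");
}
```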
| α range | Tone |
|---|---|
| α ≥ 0.8 | “Strong agreement on X, Y, Z. Trust the result.” |
| 0.6 ≤ α < 0.8 | “Moderate agreement; spotlight on dimension D.” |
| α < 0.6 | “Raters disagreed substantially on D and E. Consider re-running with a clearer rubric.” |
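The tiers above can be sketched as a simple threshold function. The tier labels are shorthand for illustration, not API values:

```typescript
// Maps Krippendorff's alpha to the narrative tone tiers in the table above.
function agreementTier(alpha: number): "strong" | "moderate" | "low" {
  if (alpha >= 0.8) return "strong";
  if (alpha >= 0.6) return "moderate";
  return "low";
}
```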
Costs
Each insight regeneration is a single Gemini completion (~$0.005, depending on rating count). Negligible. It doesn’t count against the autouser quota; it counts against the per-user platform Gemini quota only when running on the platform key. BYOK callers who bring their own key via User.geminiApiKey don’t count.
Caching
We don’t auto-regenerate. The cache is invalidated only when you POST to the /ai-insights route. This keeps the cost predictable and prevents accidental cost cascades when a flurry of ratings comes in.
A typical pattern in CI is to regenerate exactly once, after the last rating has been posted.
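That CI step might look like the sketch below: let earlier jobs post all the ratings, then trigger one regeneration so the single Gemini completion reflects the final data set. The base URL and evaluation-scoped path are assumptions; only the /ai-insights suffix is from these docs.

```typescript
const BASE_URL = "https://api.autousers.ai"; // assumption, not documented here

// Evaluation-scoped path prefix is an assumption; /ai-insights is documented.
function insightsUrl(evaluationId: string): string {
  return `${BASE_URL}/evaluations/${evaluationId}/ai-insights`;
}

// Call once at the end of the CI job, after the last rating has been posted,
// so one Gemini completion covers the full rating set.
async function regenerateInsightsOnce(evaluationId: string, apiKey: string): Promise<void> {
  const res = await fetch(insightsUrl(evaluationId), {
    method: "POST",
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) throw new Error(`insight regeneration failed: ${res.status}`);
}
```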
Limitations
- The model has access only to the rating data, not the actual designs. If you want it to reason about visual fidelity, post screenshots as files and reference them from customDimensions[].context.
- Insights do not include dimensional scores you didn’t actually collect. If you want it to reason about Accessibility, include an Accessibility dimension on the template.
- Output is in English regardless of Autousers-Locale (currently no i18n on this surface; it’s on the roadmap).