A Rating is a single rater’s verdict on a single comparison. Humans and autousers produce the same shape, so downstream analytics doesn’t have to branch on raterType.
Shape
| Field | Set when |
|---|---|
| userId | An authenticated user submitted this rating. |
| publicRaterId | An anonymous public rater (no account) submitted it. |
| autouserId | An autouser run produced it. |
| autouserRunId | The specific run row that produced it. |
Exactly one of userId and publicRaterId is set on human ratings;
both autouserId and autouserRunId are set on autouser ratings.
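A minimal TypeScript sketch of the implied shape. The four rater fields come from the table above; the id and comparisonId fields and all types are assumptions:

```ts
// One interface for both rater kinds, per the note above: which fields
// are populated depends on who produced the rating. Types are assumptions.
interface Rating {
  id: string;            // assumed identifier field
  comparisonId: string;  // assumed link to the rated comparison
  // Human ratings: exactly one of these is set.
  userId?: string;
  publicRaterId?: string;
  // Autouser ratings: both are set.
  autouserId?: string;
  autouserRunId?: string;
}
```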
Listing
Ratings are listed with cursor pagination via starting_after. See Pagination.
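A minimal sketch of walking every page with the starting_after cursor. The endpoint path, bearer auth, and the { data, has_more } response envelope are assumptions; check the API reference:

```ts
// Fetch every rating by following the starting_after cursor.
// Endpoint path, auth scheme, and envelope shape are assumptions.
async function listAllRatings(apiKey: string): Promise<unknown[]> {
  const ratings: unknown[] = [];
  let cursor: string | undefined;
  let hasMore = true;
  while (hasMore) {
    const url = new URL("https://api.autousers.ai/v1/ratings");
    if (cursor) url.searchParams.set("starting_after", cursor);
    const res = await fetch(url, {
      headers: { Authorization: `Bearer ${apiKey}` },
    });
    const page = await res.json();
    ratings.push(...page.data);
    cursor = page.data.at(-1)?.id; // next page starts after the last id
    hasMore = page.has_more;
  }
  return ratings;
}
```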
Submitting a human rating via the API
Most ratings come from the dashboard or the public share link. If you need to submit one programmatically (e.g. wiring up a custom rater UI):
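A minimal sketch, assuming a POST /v1/ratings endpoint with bearer auth. The path, the auth scheme, and every payload field except dimensionRatings are assumptions:

```ts
// Submit one human rating. Field names besides dimensionRatings
// (mentioned under "Streaming ratings into a warehouse") are assumptions.
const res = await fetch("https://api.autousers.ai/v1/ratings", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.AUTOUSERS_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    comparisonId: "cmp_123",           // hypothetical comparison id
    verdict: "a",                      // hypothetical winner field
    dimensionRatings: { clarity: 4 },  // JSON map of per-dimension scores
  }),
});
if (!res.ok) throw new Error(`Rating submission failed: ${res.status}`);
```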
Agreement
Once you have ratings from ≥3 raters per comparison, agreement metrics become useful. The /agreement endpoint computes Krippendorff’s α and,
when there are exactly two raters, Cohen’s κ.
What the numbers mean
| α range | Reading |
|---|---|
| α < 0.4 | No agreement. Treat results as anecdote, not signal. |
| 0.4 ≤ α < 0.6 | Weak. Directionally useful, not for promotion gating. |
| 0.6 ≤ α < 0.8 | Acceptable. Most teams ship gates at α ≥ 0.6. |
| α ≥ 0.8 | Strong. Suitable for automated CI gates. |
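Given the α ≥ 0.6 convention above, a promotion gate can be a short script. A sketch assuming a GET .../agreement endpoint that returns { alpha }; both the path and the response shape are assumptions:

```ts
// Fail CI when inter-rater agreement is too weak to trust the ratings.
// Endpoint path and the { alpha } response field are assumptions.
const evaluationId = process.env.EVALUATION_ID; // hypothetical env var
const res = await fetch(
  `https://api.autousers.ai/v1/evaluations/${evaluationId}/agreement`,
  { headers: { Authorization: `Bearer ${process.env.AUTOUSERS_API_KEY}` } },
);
const { alpha } = await res.json();
if (alpha < 0.6) {
  console.error(`Krippendorff's α = ${alpha.toFixed(2)}, below the 0.6 gate`);
  process.exit(1);
}
console.log(`Agreement OK (α = ${alpha.toFixed(2)})`);
```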
Caching
Agreement is cached on Evaluation.agreementCache and only recomputed
when the rating count changes. The first call after a new rating is
slightly slower (~100ms) as it warms the cache; subsequent calls are
instant.
Streaming ratings into a warehouse
The shape is stable: dimensionRatings is a JSON map; factors and
openTextResponses are JSON. Subscribe to the rating.created webhook
(see Events) and append rows to BigQuery / Snowflake
as they arrive. Use Autousers-Event-Id as the dedup key on insert.
See the Looker / BigQuery recipe.
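A sketch of the webhook-to-BigQuery path, assuming an Express handler and the @google-cloud/bigquery client; the event envelope (req.body.data) and the dataset/table names are assumptions. Passing the event id as BigQuery’s insertId gives best-effort dedup on the streaming-insert path:

```ts
import express from "express";
import { BigQuery } from "@google-cloud/bigquery";

const app = express();
app.use(express.json());
const bq = new BigQuery();

// Append each rating.created event as one row. Dataset/table names are
// placeholders; Autousers-Event-Id doubles as the streaming-insert dedup key.
app.post("/webhooks/autousers", async (req, res) => {
  const eventId = req.header("Autousers-Event-Id");
  const rating = req.body.data; // assumed envelope: { type, data: Rating }
  await bq
    .dataset("analytics")
    .table("ratings")
    .insert([{ insertId: eventId, json: rating }], { raw: true });
  res.sendStatus(204);
});

app.listen(8080);
```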