eval-output-grading

Domain: eval · Model class: cheap

Use this skill when the user wants to work on grading AI outputs using rubrics, schema validation, pairwise comparison, and judge models. Triggers include “grade these outputs”, “score AI responses”, “rubric-based grading”. Do NOT use when the task is to design the grading criteria first (use core-eval-design).

Grading AI outputs using rubrics, schema validation, pairwise comparison, and judge models. This skill provides structured guidance, references, and worked examples to help produce high-quality, actionable outputs.
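Rubric-based grading, the first technique named above, reduces to scoring an output against weighted criteria and normalizing. A minimal sketch, where the rubric, criterion names, and check functions are illustrative assumptions rather than the skill's actual rubric:

```python
# Hypothetical rubric-based grader: each criterion returns a 0.0-1.0 score,
# and the grade is the weight-normalized average across criteria.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Criterion:
    name: str
    weight: float
    check: Callable[[str], float]  # maps an output to a 0.0-1.0 score

def grade(output: str, rubric: list[Criterion]) -> float:
    """Weighted average of per-criterion scores, normalized to 0-1."""
    total_weight = sum(c.weight for c in rubric)
    return sum(c.weight * c.check(output) for c in rubric) / total_weight

# Toy rubric: real criteria would usually call a judge model, not lambdas.
rubric = [
    Criterion("non_empty", 1.0, lambda o: 1.0 if o.strip() else 0.0),
    Criterion("concise", 0.5, lambda o: 1.0 if len(o) <= 200 else 0.0),
    Criterion("cites_source", 2.0, lambda o: 1.0 if "[source]" in o else 0.0),
]

print(grade("Short answer. [source]", rubric))  # → 1.0
```

Keeping each criterion as an independent scorer makes per-criterion failure breakdowns, not just a single scalar, available in the grading report.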

Triggers:

  • “grade these outputs”
  • “score AI responses”
  • “rubric-based grading”
  • “validate output schema”
  • “judge model outputs”
  • “pairwise comparison”

Do NOT use when:

  • designing the grading criteria first (use core-eval-design)
  • measuring variance across runs (use core-variance-analysis)
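One trigger above, schema validation, is the most mechanical form of grading: parse the output and check required fields and types before any qualitative scoring. A minimal sketch, using only the standard library; the field names and schema are illustrative assumptions, not part of any specific skill implementation:

```python
# Minimal schema check for a structured model output: parse as JSON, then
# verify required fields and their types. SCHEMA is a hypothetical example.
import json

SCHEMA = {"answer": str, "confidence": float}

def validate(raw: str) -> tuple[bool, list[str]]:
    """Return (passed, error messages) for one raw model output."""
    errors: list[str] = []
    try:
        obj = json.loads(raw)
    except json.JSONDecodeError as e:
        return False, [f"invalid JSON: {e}"]
    for field, expected in SCHEMA.items():
        if field not in obj:
            errors.append(f"missing field: {field}")
        elif not isinstance(obj[field], expected):
            errors.append(f"{field}: expected {expected.__name__}")
    return not errors, errors

ok, errs = validate('{"answer": "42", "confidence": 0.9}')
print(ok)  # → True
```

Running the schema gate first keeps judge-model calls (and their cost) for outputs that are at least structurally valid.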
Clarifying questions:

  1. What is the user’s goal and current state?
  2. What constraints (time, team, compliance) apply?
  3. Are there existing artifacts (specs, code, benchmarks) to reference?

Outputs:

  • evaluation criteria
  • scoring or benchmark framing
  • comparison-ready output
  • decision guidance
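For comparison-ready output, pairwise judging is the usual framing: the judge sees two candidates and picks a winner, with a position swap to control for order bias. A sketch under stated assumptions; the `judge` function here is a stand-in heuristic, where a real setup would call a judge model:

```python
# Pairwise comparison with position-swapping to reduce order bias.
# All names are illustrative; `judge` is a toy heuristic, not a real judge.
def judge(prompt: str, a: str, b: str) -> str:
    """Toy judge: prefers the longer answer. Returns 'A' or 'B'."""
    return "A" if len(a) >= len(b) else "B"

def compare(prompt: str, out_a: str, out_b: str) -> str:
    """Run the judge in both orders; declare a winner only on agreement."""
    first = judge(prompt, out_a, out_b)            # original order
    second = judge(prompt, out_b, out_a)           # swapped order
    second_mapped = "A" if second == "B" else "B"  # map back to out_a/out_b
    if first == second_mapped:
        return first
    return "tie"  # judges disagree across orders → inconclusive

print(compare("Q", "a longer answer", "short"))  # → A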

Related skills: eval-design · eval-prompt-bench · eval-variance