eval-prompt-bench

Domain: eval · Model class: cheap

Use this skill when the user wants to run benchmarks that score prompts, compare versions, and detect regressions. Triggers include “benchmark this prompt”, “compare prompt versions”, and “detect prompt regressions”. Do NOT use when the eval still needs to be designed (use core-eval-design).

This skill covers running benchmarks to score prompts, compare versions, and detect regressions. It provides structured guidance, references, and worked examples to help produce high-quality, actionable outputs.

Triggers:

  • “benchmark this prompt”
  • “compare prompt versions”
  • “detect prompt regressions”
  • “run my eval suite”
  • “score this prompt”

Do not use for:

  • designing the eval first (use core-eval-design)
  • grading individual outputs (use core-output-grading)
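The core workflow the triggers point at — scoring a prompt version against a fixed test set — can be sketched minimally. Everything here is an assumption for illustration: `call_model` is a stand-in for whatever model client the suite uses, and exact match is just one possible scorer.

```python
# Minimal sketch of a prompt benchmark run. All names here are
# hypothetical: swap call_model and exact_match for the real client
# and metric used by the eval suite.
from statistics import mean


def call_model(prompt: str, case_input: str) -> str:
    # Placeholder for a real model call; here it just uppercases.
    return case_input.upper()


def exact_match(output: str, expected: str) -> float:
    # Binary scorer: 1.0 on exact match, else 0.0.
    return 1.0 if output == expected else 0.0


def run_benchmark(prompt: str, cases: list) -> dict:
    # Score every test case, then summarize with the mean.
    scores = [
        exact_match(call_model(prompt, c["input"]), c["expected"])
        for c in cases
    ]
    return {"mean": mean(scores), "n": len(scores), "scores": scores}


cases = [
    {"input": "abc", "expected": "ABC"},
    {"input": "def", "expected": "DEF"},
]
result = run_benchmark("Uppercase the input.", cases)
```

Keeping the per-case `scores` list (not only the mean) is what later makes version-to-version comparison and regression diffing possible.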
Intake questions:

  1. What is the user’s goal and current state?
  2. What constraints (time, team, compliance) apply?
  3. Are there existing artifacts (specs, code, benchmarks) to reference?
Outputs:

  • evaluation criteria
  • scoring or benchmark framing
  • comparison-ready output
  • decision guidance
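For the decision-guidance output, the comparison between a baseline and a candidate prompt can be reduced to a small check. This is a hedged sketch, not a prescribed method: the 0.02 tolerance and the per-case diff are illustrative assumptions.

```python
# Hypothetical regression check: compare per-case scores from two
# benchmark runs and flag a regression when the mean drops by more
# than a tolerance. The threshold value is an assumption.
from statistics import mean


def detect_regression(baseline: list, candidate: list,
                      tolerance: float = 0.02) -> dict:
    # Mean delta across the shared test set (positive = improvement).
    delta = mean(candidate) - mean(baseline)
    # Indices of test cases whose score got worse.
    regressed_cases = [
        i for i, (b, c) in enumerate(zip(baseline, candidate)) if c < b
    ]
    return {
        "delta": round(delta, 4),
        "regression": delta < -tolerance,
        "regressed_cases": regressed_cases,
    }


report = detect_regression([1.0, 1.0, 0.5, 1.0], [1.0, 0.0, 0.5, 1.0])
```

Reporting the regressed case indices alongside the aggregate delta is what turns a raw score into decision guidance: it tells the user which inputs to inspect before accepting or rejecting the new version.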

Related skills: eval-design · eval-output-grading · eval-variance