Models¶
mcp_zen_of_languages.models¶
Shared data-model layer for every analyser, detector, and MCP tool.
Every struct that crosses a boundary — between an analyser and the MCP
server, between the detection pipeline and the CLI, or between a detector
and its caller — lives here. The models are built on Pydantic v2 so
they carry automatic validation, JSON-round-trip fidelity, and IDE-friendly
autocompletion, yet several of them expose a dict-like access shim
(__getitem__ / get) so that legacy test suites written against
plain-dict results keep passing without changes.
Design principles:
- Single source of truth — if a field appears in an analysis result, its schema is defined here, not duplicated across analysers.
- Immutable value objects — models are typically frozen after creation; mutability is reserved for AnalysisContext in the pipeline.
- Composable hierarchy — small models (Location, Violation) compose into larger ones (Metrics, AnalysisResult, ProjectSummary) without deep inheritance.
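The dict-like access shim mentioned above can be sketched roughly as follows. This is a minimal illustrative stand-in built on a plain dataclass rather than the real Pydantic BaseModel; the class and field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ShimExample:
    """Illustrative stand-in for a frozen model exposing the dict shim."""
    severity: int
    message: str

    def __getitem__(self, key: str) -> object:
        # Bracket access delegates to attribute lookup, so legacy
        # dict-style assertions like v["severity"] keep working.
        return getattr(self, key)

    def get(self, key: str, default: object = None) -> object:
        # Mirror dict.get: fall back to `default` for unknown keys.
        return getattr(self, key, default)

v = ShimExample(severity=6, message="too deep")
print(v["severity"])        # bracket access -> 6
print(v.get("missing", 0))  # dict-style fallback -> 0
```

The frozen dataclass also mirrors the "immutable value objects" principle: attribute reassignment raises an error after construction.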
See Also
analyzers.base: The BaseAnalyzer that produces AnalysisResult.
analyzers.pipeline: The DetectionPipeline that emits Violation lists.
Classes¶
PerspectiveMode¶
Bases: StrEnum
High-level analysis perspective selector for CLI and MCP surfaces.
Location¶
Bases: BaseModel
Pin-point position inside a source file.
Detectors attach a Location to every Violation they emit so
that downstream consumers — IDE extensions, CLI reporters, dashboard
renderers — can jump straight to the offending line. Both fields use
1-based indexing to match what editors display to users.
| ATTRIBUTE | DESCRIPTION |
|---|---|
| line | 1-based line number where the issue begins. |
| column | 1-based column offset within that line. |
Example
```python
>>> loc = Location(line=42, column=5)
>>> loc.line
42
```
See Also
Violation: Carries an optional Location for each detected issue.
PatternFinding: Reuses Location to mark architectural pattern sites.
Violation¶
Bases: BaseModel
A single zen-principle violation detected in analysed code.
When a detector in the pipeline spots code that breaks an idiomatic
rule — say, a function whose cyclomatic complexity exceeds the
configured ceiling — it creates a Violation carrying the rule
identity, a human-readable explanation, and an optional fix hint.
The model deliberately exposes get() and __getitem__() so that
older test suites that treated results as plain dictionaries keep
working unchanged. New code should use attribute access directly.
| ATTRIBUTE | DESCRIPTION |
|---|---|
| principle | Canonical rule identifier, e.g. zen-of-python.flat. |
| severity | Impact weight from 1 (cosmetic) to 10 (critical defect). |
| message | One-sentence description of what went wrong. |
| suggestion | Actionable fix hint, or None when no fix is suggested. |
| location | Source position (line + column), if determinable. |
| files | Paths involved when a violation spans multiple files. |
Example
```python
>>> v = Violation(
...     principle="zen-of-python.flat",
...     severity=6,
...     message="Function nesting exceeds 3 levels",
...     suggestion="Extract the inner block into a helper function",
...     location=Location(line=87, column=1),
... )
>>> v["severity"]
6
>>> v.get("suggestion")
'Extract the inner block into a helper function'
```
See Also
Location: Positional anchor embedded inside each violation.
AnalysisResult: Aggregates a list of violations for one file.
Functions¶
get¶
Retrieve a field value by name, falling back to default.
This bridge keeps older test code that calls violation.get("severity")
running without modification. Under the hood it delegates to
getattr, so any Pydantic field is reachable.
| PARAMETER | DESCRIPTION |
|---|---|
| key | Attribute name matching one of the model fields (e.g. "severity"). |
| default | Fallback returned when key does not correspond to an existing attribute. Defaults to None. |
| RETURNS | DESCRIPTION |
|---|---|
| object \| None | The field value when key exists, otherwise default. |
Example
```python
>>> v = Violation(principle="flat", severity=4, message="too deep")
>>> v.get("severity")
4
>>> v.get("nonexistent", "fallback")
'fallback'
```
Source code in src/mcp_zen_of_languages/models.py
CyclomaticBlock¶
Bases: BaseModel
Complexity measurement for a single callable block.
Radon (or an equivalent metric engine) walks the AST and produces one
CyclomaticBlock per function, method, or top-level code block.
The block records the callable's name, its McCabe complexity score,
and the line where the definition starts — enough for a detector to
decide whether the block breaches the configured threshold.
| ATTRIBUTE | DESCRIPTION |
|---|---|
| name | Qualified name of the function or method. |
| complexity | McCabe cyclomatic complexity score (≥ 1). |
| lineno | 1-based line where the callable is defined. |
Example
```python
>>> block = CyclomaticBlock(name="parse_header", complexity=12, lineno=55)
>>> block.complexity > 10
True
```
See Also
CyclomaticSummary: Aggregates multiple blocks into an average score.
CyclomaticSummary¶
Bases: BaseModel
Aggregate complexity profile for an entire file or snippet.
After every callable has been scored individually, the analyser
rolls the per-block numbers into a CyclomaticSummary. The
average gives a quick health indicator, while the full blocks
list lets detectors flag only the functions that actually exceed the
threshold — avoiding noisy blanket warnings.
| ATTRIBUTE | DESCRIPTION |
|---|---|
| blocks | Ordered list of per-callable complexity measurements. |
| average | Arithmetic mean of all block complexities in the file. |
Example
```python
>>> summary = CyclomaticSummary(
...     blocks=[
...         CyclomaticBlock(name="read", complexity=3, lineno=10),
...         CyclomaticBlock(name="write", complexity=7, lineno=40),
...     ],
...     average=5.0,
... )
>>> len(summary.blocks)
2
```
See Also
CyclomaticBlock: The per-function detail record.
Metrics: Wraps this summary alongside maintainability and LOC.
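As a sketch of how a detector might use the blocks list to flag only the callables that breach the ceiling, rather than issuing a blanket warning. The dataclasses below are illustrative stand-ins for the real Pydantic models, and the threshold value is a hypothetical configuration entry.

```python
from dataclasses import dataclass

@dataclass
class CyclomaticBlock:  # stand-in for the real Pydantic model
    name: str
    complexity: int
    lineno: int

@dataclass
class CyclomaticSummary:  # stand-in
    blocks: list
    average: float

summary = CyclomaticSummary(
    blocks=[
        CyclomaticBlock(name="read", complexity=3, lineno=10),
        CyclomaticBlock(name="write", complexity=12, lineno=40),
    ],
    average=7.5,
)

THRESHOLD = 10  # illustrative ceiling, e.g. sourced from zen-config.yaml
# Only the blocks that actually exceed the ceiling get flagged.
offenders = [b for b in summary.blocks if b.complexity > THRESHOLD]
print([b.name for b in offenders])  # -> ['write']
```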
Metrics¶
Bases: BaseModel
Container for every numeric measurement extracted from a source file.
The analyser's compute_metrics hook populates this model after
parsing is complete. Downstream, the detection pipeline reads these
numbers to decide which zen-principle thresholds have been crossed,
and the MCP server serialises them back to the client alongside the
violation list.
| ATTRIBUTE | DESCRIPTION |
|---|---|
| cyclomatic | Full complexity profile with per-block detail. |
| maintainability_index | Halstead-derived maintainability score (0-100). |
| lines_of_code | Physical line count of the analysed source. |
Example
```python
>>> m = Metrics(
...     cyclomatic=CyclomaticSummary(blocks=[], average=0.0),
...     maintainability_index=72.5,
...     lines_of_code=340,
... )
>>> m.maintainability_index > 65
True
```
See Also
CyclomaticSummary: Detailed breakdown stored inside cyclomatic.
AnalysisResult: Final output that embeds Metrics.
RulesSummary¶
Bases: BaseModel
Quick-glance severity histogram for a single analysis run.
After the pipeline finishes, violations are bucketed into four tiers
so that reporters can display a compact traffic-light summary without
iterating the full violation list. The buckets mirror the tier
boundaries defined in zen-config.yaml.
| ATTRIBUTE | DESCRIPTION |
|---|---|
| critical | Number of violations with severity in the 8-10 range. |
| high | Number of violations with severity in the 6-7 range. |
| medium | Number of violations with severity in the 4-5 range. |
| low | Number of violations with severity in the 1-3 range. |
Example
```python
>>> rs = RulesSummary(critical=1, high=3, medium=5, low=12)
>>> rs.critical + rs.high
4
```
See Also
SeverityCounts: Identical shape used at the project level.
AnalysisResult: Optionally embeds a RulesSummary.
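The bucketing step can be sketched as a simple tier lookup. This is illustrative: the function name is hypothetical, and the real pipeline reads the tier boundaries from zen-config.yaml rather than hard-coding them.

```python
def bucket_severities(severities):
    """Bucket raw severity scores into the four RulesSummary tiers.

    Tier boundaries follow the table above: 8-10 critical, 6-7 high,
    4-5 medium, 1-3 low.
    """
    counts = {"critical": 0, "high": 0, "medium": 0, "low": 0}
    for s in severities:
        if s >= 8:
            counts["critical"] += 1
        elif s >= 6:
            counts["high"] += 1
        elif s >= 4:
            counts["medium"] += 1
        else:
            counts["low"] += 1
    return counts

print(bucket_severities([9, 6, 7, 4, 2, 1]))
# {'critical': 1, 'high': 2, 'medium': 1, 'low': 2}
```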
SeverityCounts¶
Bases: BaseModel
Project-wide severity breakdown across all analysed files.
While RulesSummary counts violations inside a single file,
SeverityCounts rolls those numbers up to the whole repository or
project scope. The ProjectSummary model embeds one instance so
that dashboards can render a top-level health badge in a single read.
| ATTRIBUTE | DESCRIPTION |
|---|---|
| critical | Total critical-severity violations across all files. |
| high | Total high-severity violations across all files. |
| medium | Total medium-severity violations across all files. |
| low | Total low-severity violations across all files. |
Example
```python
>>> sc = SeverityCounts(critical=0, high=2, medium=14, low=31)
>>> sc.critical == 0
True
```
See Also
RulesSummary: Per-file equivalent with the same bucket shape.
ProjectSummary: Parent model that carries SeverityCounts.
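The roll-up from per-file RulesSummary buckets to project-wide SeverityCounts is a field-wise sum, which can be sketched with plain dicts standing in for the models (the shapes mirror the tables above; the variable names are illustrative):

```python
from collections import Counter

# Per-file bucket counts, shaped like RulesSummary.
per_file = [
    {"critical": 0, "high": 1, "medium": 2, "low": 3},
    {"critical": 1, "high": 0, "medium": 4, "low": 5},
]

# Project-wide roll-up: field-wise sum across files (SeverityCounts shape).
totals = Counter()
for rs in per_file:
    totals.update(rs)

print(dict(totals))  # {'critical': 1, 'high': 1, 'medium': 6, 'low': 8}
```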
WorstOffender¶
Bases: BaseModel
Spotlight on the file that accumulated the most violations.
After a repository scan, the server sorts files by violation count and surfaces the top offenders so that developers know where to focus remediation effort first. Each entry records the file path, the raw count, and — when determinable — the language that was used for analysis.
| ATTRIBUTE | DESCRIPTION |
|---|---|
| path | Repository-relative path to the offending file. |
| violations | Total number of zen-principle violations in the file. |
| language | Analysis language key (e.g. python), when determinable. |
Example
```python
>>> wo = WorstOffender(path="src/legacy.py", violations=23, language="python")
>>> wo.violations
23
```
See Also
ProjectSummary: Carries a ranked list of WorstOffender entries.
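The ranking step described above amounts to a descending sort on violation count. A sketch, using hypothetical per-file tallies in place of real analyser output:

```python
# Hypothetical per-file tallies: (path, violation_count, language).
tallies = [
    ("src/ok.py", 2, "python"),
    ("src/legacy.py", 23, "python"),
    ("web/app.ts", 9, "typescript"),
]

# Rank by descending violation count and keep the top entries,
# mirroring how the server builds the WorstOffender list.
ranked = sorted(tallies, key=lambda t: t[1], reverse=True)
top = ranked[:2]
print([path for path, count, lang in top])
# ['src/legacy.py', 'web/app.ts']
```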
ProjectSummary¶
Bases: BaseModel
Bird's-eye health report spanning an entire repository scan.
When the MCP server analyses a multi-file project, it folds individual
AnalysisResult objects into a single ProjectSummary — giving
the caller total file and violation counts, a severity histogram, and
a ranked list of the worst-offending files. This is the payload
behind the analyze_zen_repository tool's summary section.
| ATTRIBUTE | DESCRIPTION |
|---|---|
| total_files | Number of source files that were analysed. |
| total_violations | Sum of all violations across every file. |
| severity_counts | Project-wide severity bucket breakdown. |
| worst_offenders | Files ranked by descending violation count. |
Example
```python
>>> ps = ProjectSummary(
...     total_files=42,
...     total_violations=108,
...     severity_counts=SeverityCounts(critical=2, high=10, medium=40, low=56),
...     worst_offenders=[
...         WorstOffender(
...             path="core/engine.py", violations=31, language="python"
...         ),
...     ],
... )
>>> ps.total_files
42
```
See Also
SeverityCounts: The histogram embedded inside this summary.
WorstOffender: Individual entries in the offender list.
AnalysisResult: Per-file detail that feeds into this aggregate.
ExternalToolResult¶
Bases: BaseModel
Execution metadata for one optional external analysis tool.
ExternalAnalysisResult¶
Bases: BaseModel
Optional external-analysis envelope attached to AnalysisResult.
DogmaFinding¶
Bases: BaseModel
Aggregated universal-dogma signal derived from one or more violations.
DogmaDomainFinding¶
Bases: BaseModel
Shared cross-language dogma-domain signal aggregated from dogma findings.
DogmaAnalysis¶
Bases: BaseModel
Universal-dogma findings attached to a single AnalysisResult.
AnalysisResult¶
Bases: BaseModel
Primary output produced by every language analyser.
A call to BaseAnalyzer.analyze() returns exactly one
AnalysisResult. It bundles the computed metrics, the full
violation list, and a composite health score into a single,
JSON-serialisable envelope. The MCP server forwards this model
directly to the client; the CLI formats it for terminal display.
Like Violation, this model supports bracket access
(result["violations"]) so that legacy dict-oriented test
assertions continue to pass without rewrites.
| ATTRIBUTE | DESCRIPTION |
|---|---|
| language | Language key used for analysis (e.g. python). |
| path | File path, or None when analysing an in-memory snippet. |
| metrics | Computed complexity, maintainability, and LOC. |
| violations | Ordered list of detected zen-principle violations. |
| overall_score | Composite quality score from 0.0 (worst) to 10.0. |
| rules_summary | Optional severity histogram for quick triage. |
Example
```python
>>> result = AnalysisResult(
...     language="python",
...     path="app/routes.py",
...     metrics=Metrics(
...         cyclomatic=CyclomaticSummary(blocks=[], average=0.0),
...         maintainability_index=80.0,
...         lines_of_code=150,
...     ),
...     violations=[],
...     overall_score=9.2,
... )
>>> result["overall_score"]
9.2
```
See Also
Metrics: The numeric measurements embedded in this result.
Violation: Individual issues inside the violations list.
RepositoryAnalysis: Wraps an AnalysisResult with file metadata.
RepositoryAnalysis¶
Bases: BaseModel
Per-file wrapper used when scanning an entire repository.
During a repository-wide analysis the server produces one
RepositoryAnalysis per source file, pairing the file's path and
detected language with the full AnalysisResult. Collecting these
into a list gives the MCP client an iterable, JSON-friendly manifest
of every file that was inspected.
| ATTRIBUTE | DESCRIPTION |
|---|---|
| path | Repository-relative path to the analysed file. |
| language | Language key that the analyser factory resolved. |
| result | Complete analysis output for this file. |
Example
```python
>>> entry = RepositoryAnalysis(
...     path="lib/parser.py",
...     language="python",
...     result=analysis_result,
... )
>>> entry.path
'lib/parser.py'
```
See Also
AnalysisResult: The per-file detail carried inside result.
ProjectSummary: Aggregate statistics derived from all entries.
PatternFinding¶
Bases: BaseModel
Record of a recognised architectural or design pattern.
Pattern detectors walk the AST looking for well-known structures —
factory functions, observer registrations, strategy switches — and
emit a PatternFinding for each match. The finding carries the
pattern's canonical name, an optional source location, and a free-form
detail string for context that doesn't fit a structured field.
| ATTRIBUTE | DESCRIPTION |
|---|---|
| name | Canonical pattern name (e.g. singleton). |
| location | Source position where the pattern was detected. |
| details | Free-form context string explaining the match. |
Example
```python
>>> pf = PatternFinding(
...     name="singleton",
...     location=Location(line=12, column=1),
...     details="Module-level instance guarded by __new__ override",
... )
>>> pf.name
'singleton'
```
See Also
PatternsResult: Collects multiple findings into one response.
Location: Positional anchor reused here for pattern sites.
PatternsResult¶
Bases: BaseModel
Bundled response from the architectural-pattern detection pass.
The analyze_zen_patterns MCP tool returns a PatternsResult
containing every pattern that was matched in the target code. An
empty patterns list simply means no known patterns were detected —
it is not an error condition.
| ATTRIBUTE | DESCRIPTION |
|---|---|
| patterns | Ordered list of detected pattern findings. |
Example
```python
>>> pr = PatternsResult(
...     patterns=[
...         PatternFinding(name="observer", details="event bus in signals.py"),
...     ]
... )
>>> len(pr.patterns)
1
```
See Also
PatternFinding: Individual match carried inside the list.
ParserResult¶
Bases: BaseModel
Opaque wrapper around a language-specific parse tree.
Each analyser's parse_code hook returns a ParserResult so
that the pipeline can carry the tree through the AnalysisContext
without knowing whether the underlying parser produced a tree-sitter
node, a stdlib ast.Module, or something else entirely. The
type tag lets downstream consumers branch on the parser kind when
they need to inspect the tree directly.
| ATTRIBUTE | DESCRIPTION |
|---|---|
| type | Parser backend identifier (e.g. ast). |
| tree | The actual parse tree object, opaque to the pipeline. |
Example
```python
>>> import ast
>>> pr = ParserResult(type="ast", tree=ast.parse("x = 1"))
>>> pr.type
'ast'
```
Note
tree is typed as object | None because Pydantic cannot
validate arbitrary third-party AST nodes. Treat it as an opaque
handle and cast to the expected type inside language-specific code.
See Also
BaseAnalyzer.parse_code: The hook that creates this wrapper.
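Branching on the type tag before casting the opaque tree can be sketched as below. The dataclass is an illustrative stand-in for the real Pydantic model, and count_assignments is a hypothetical consumer.

```python
import ast
from dataclasses import dataclass

@dataclass
class ParserResult:  # stand-in for the real Pydantic model
    type: str
    tree: object = None

def count_assignments(pr: ParserResult) -> int:
    """Branch on the parser kind before touching the opaque tree."""
    if pr.type == "ast":
        # Only here is it safe to treat the tree as a stdlib ast.Module.
        return sum(isinstance(node, ast.Assign) for node in ast.walk(pr.tree))
    # Unknown backend: leave the tree opaque rather than guessing.
    raise ValueError(f"unsupported parser backend: {pr.type}")

pr = ParserResult(type="ast", tree=ast.parse("x = 1\ny = 2"))
print(count_assignments(pr))  # -> 2
```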
DependencyCycle¶
Bases: BaseModel
A single circular dependency path found in the import graph.
When the dependency analyser builds a directed graph of module
imports and detects a strongly-connected component, it records each
cycle as a DependencyCycle. The cycle list names the modules
in traversal order — the last element implicitly links back to the
first, closing the loop.
| ATTRIBUTE | DESCRIPTION |
|---|---|
| cycle | Module names forming the circular import chain. |
Example
```python
>>> dc = DependencyCycle(cycle=["auth", "users", "auth"])
>>> "auth" in dc.cycle
True
```
See Also
DependencyAnalysis: Parent model that collects all cycles.
DependencyAnalysis¶
Bases: BaseModel
Full dependency-graph report for a codebase or file set.
The dependency analyser constructs a directed graph where each node is a module and each edge is an import statement. After graph construction it runs cycle detection (e.g. Tarjan's algorithm) and packages the raw topology together with any circular paths into this model. Detectors use it to flag architectural violations such as layering breaches or tangled dependency clusters.
| ATTRIBUTE | DESCRIPTION |
|---|---|
| nodes | Unique module identifiers present in the import graph. |
| edges | Directed (source, target) import edges of the graph. |
| cycles | Circular dependency paths detected in the graph. |
Example
```python
>>> da = DependencyAnalysis(
...     nodes=["app", "db", "cache"],
...     edges=[("app", "db"), ("app", "cache"), ("cache", "db")],
...     cycles=[],
... )
>>> len(da.cycles)
0
```
See Also
DependencyCycle: Individual cycle record inside cycles.
Violation: The pipeline converts flagged cycles into violations.
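Cycle detection over the nodes/edges topology can be sketched with a plain depth-first search (a simpler substitute for the Tarjan's algorithm mentioned above; the function name is illustrative). The returned list follows the DependencyCycle convention: the last element repeats the first, closing the loop.

```python
def find_cycle(nodes, edges):
    """Return one circular import path, or None if the graph is acyclic."""
    graph = {n: [] for n in nodes}
    for src, dst in edges:
        graph[src].append(dst)

    def dfs(node, path, on_path):
        if node in on_path:
            # Back edge found: slice out the loop and close it.
            i = path.index(node)
            return path[i:] + [node]
        on_path.add(node)
        path.append(node)
        for nxt in graph.get(node, []):
            found = dfs(nxt, path, on_path)
            if found:
                return found
        path.pop()
        on_path.discard(node)
        return None

    for start in nodes:
        cycle = dfs(start, [], set())
        if cycle:
            return cycle
    return None

print(find_cycle(["auth", "users"], [("auth", "users"), ("users", "auth")]))
# ['auth', 'users', 'auth']
print(find_cycle(["app", "db"], [("app", "db")]))  # None: no cycle
```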
BatchViolation¶
Bases: BaseModel
A zen-principle violation enriched with its source file context.
analyze_batch collects violations from multiple files and adds
the originating file path and language to each entry so that LLM
agents can act on them without needing to correlate back to the
original per-file result.
| ATTRIBUTE | DESCRIPTION |
|---|---|
| file | Filesystem path of the file that produced this violation, as returned by the repository analyser (may be absolute). |
| language | Language key used to analyse the file (e.g. python). |
| principle | Canonical rule identifier, e.g. zen-of-python.flat. |
| severity | Impact weight from 1 (cosmetic) to 10 (critical defect). |
| message | One-sentence description of what went wrong. |
| suggestion | Actionable fix hint, or None when no fix is suggested. |
| location | Source position (line + column), if determinable. |
See Also
BatchPage: Carries a list of BatchViolation entries per page.
Violation: The underlying single-file violation model.
BatchHotspot¶
Bases: BaseModel
File-level hotspot record for the batch summary.
The analyze_batch_summary tool ranks files by their worst
violations and returns the top offenders as BatchHotspot entries.
Each entry captures the file path, language, violation count, and the
maximum severity found so that LLM agents can prioritise remediation
effort without reading every violation.
| ATTRIBUTE | DESCRIPTION |
|---|---|
| path | Filesystem path to the hotspot file, as returned by the repository analyser (may be absolute). |
| language | Language key resolved for this file. |
| violations | Total number of violations found in the file. |
| top_severity | Highest severity score among all violations in the file. |
See Also
BatchSummary: Carries the ranked list of BatchHotspot entries.
BatchPage¶
Bases: BaseModel
Paginated result returned by a single analyze_batch call.
Each invocation of analyze_batch returns exactly one BatchPage.
The cursor field encodes the position to resume from on the next
call; a None cursor means the caller has received all violations.
The violations list contains only the highest-severity items that
fit within the requested max_tokens budget.
| ATTRIBUTE | DESCRIPTION |
|---|---|
| cursor | Opaque base-64 continuation token, or None once every violation has been delivered. |
| page | 1-based logical page number derived from a fixed page size of 50 violations, independent of the token-budget-based actual page size. |
| has_more | Whether further pages remain after this one. |
| violations | Highest-severity violations fitting within the token budget. |
| files_in_page | Number of distinct files whose violations appear in this page. |
| files_total | Total number of analysed files across all pages. |
Example
```python
>>> page = BatchPage(
...     cursor=None,
...     page=1,
...     has_more=False,
...     violations=[],
...     files_in_page=3,
...     files_total=3,
... )
>>> page.has_more
False
```
See Also
BatchViolation: Individual entries inside the violations list.
BatchSummary: One-shot overview companion to this paginated model.
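A client-side loop that drains every page by following the cursor might be sketched as below. call_analyze_batch is a hypothetical stand-in that serves two canned pages; a real client would invoke the analyze_batch MCP tool instead.

```python
def call_analyze_batch(cursor=None):
    """Hypothetical stand-in for an analyze_batch tool call.

    Serves two canned page dicts so the cursor loop below can run.
    """
    if cursor is None:
        return {"cursor": "b64:page2", "has_more": True,
                "violations": ["v1", "v2"]}
    return {"cursor": None, "has_more": False, "violations": ["v3"]}

# Drain all pages: keep calling until the cursor comes back as None,
# which signals that every violation has been delivered.
collected, cursor = [], None
while True:
    page = call_analyze_batch(cursor=cursor)
    collected.extend(page["violations"])
    cursor = page["cursor"]
    if cursor is None:
        break

print(collected)  # ['v1', 'v2', 'v3']
```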
BatchSummary¶
Bases: BaseModel
One-shot health overview for the analyze_batch_summary tool.
Unlike the paginated BatchPage, a BatchSummary always fits in
a single LLM context window — it trades full violation detail for a
compact health score and the top-5 hotspot files. LLM agents can use
this as a quick triage step before deciding which analyze_batch
pages to retrieve.
| ATTRIBUTE | DESCRIPTION |
|---|---|
| health_score | Project health expressed as a 0-100 score (higher is better). |
| hotspots | Up to five files ranked by total violation count (with highest top severity used to break ties). |
| total_violations | Sum of violations across every analysed file. |
| total_files | Total number of source files that were analysed. |
Example
```python
>>> summary = BatchSummary(
...     health_score=74.0,
...     hotspots=[],
...     total_violations=142,
...     total_files=47,
... )
>>> summary.health_score
74.0
```
See Also
BatchHotspot: Individual entries in the hotspots list.
BatchPage: Full paginated detail companion to this summary.
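One way an agent might use the summary as a triage gate is sketched below. The payload shape mirrors the table above, but the threshold values and the decision rule itself are purely illustrative assumptions, not part of the tool contract.

```python
# Hypothetical summary payload shaped like BatchSummary.
summary = {
    "health_score": 74.0,
    "hotspots": [
        {"path": "core/engine.py", "violations": 31, "top_severity": 9},
    ],
    "total_violations": 142,
    "total_files": 47,
}

# Illustrative triage rule: only page through full analyze_batch detail
# when the project looks unhealthy or a hotspot carries a critical issue.
needs_detail = (
    summary["health_score"] < 80.0
    or any(h["top_severity"] >= 8 for h in summary["hotspots"])
)
print(needs_detail)  # True: health below 80 and a severity-9 hotspot
```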
LanguagesResult¶
Bases: BaseModel
Enumeration of every language the server can currently analyse.
The list_zen_languages MCP tool returns a LanguagesResult so
clients can discover which language keys are valid before calling
analysis endpoints. The list is populated at startup from the
AnalyzerFactory registry and stays stable for the lifetime of the
server process.
| ATTRIBUTE | DESCRIPTION |
|---|---|
| languages | Sorted list of supported language identifiers. |
Example
```python
>>> lr = LanguagesResult(languages=["python", "rust", "typescript"])
>>> "python" in lr.languages
True
```
See Also
AnalyzerFactory: The registry that defines available languages.