mcp_zen_of_languages.analyzers.base

Analyzer architecture built on Template Method and Strategy patterns.

This module is the architectural backbone of the zen analysis engine. It replaces a monolithic 400-line analyze() method with a composable pipeline of focused components, each governed by a well-known design pattern:

  • Template Method (BaseAnalyzer) defines the invariant analysis skeleton — parse → metrics → detect → result — while language subclasses override only the hooks that differ (parse_code, compute_metrics, build_pipeline).
  • Strategy (ViolationDetector) encapsulates each detection algorithm behind a uniform detect() contract, so detectors can be swapped, reordered, or shared across languages without touching the pipeline.
  • Context Object (AnalysisContext) carries all intermediate state — AST, metrics, dependency graph — as a single Pydantic model, eliminating parameter explosion and giving every detector type-safe access to upstream results.
  • Pipeline / Chain of Responsibility (DetectionPipeline) runs detectors sequentially, isolating failures so one broken detector never silences the rest.

Data flows through these layers as:

server / cli
  → AnalyzerFactory.create(language)
    → BaseAnalyzer.analyze(code)
      → parse_code()          # Template hook
      → compute_metrics()     # Template hook
      → DetectionPipeline.run()
        → ViolationDetector.detect()  x N
      → _build_result()
    → AnalysisResult
Adding a new language

Subclass BaseAnalyzer, implement three hooks, and register language-specific detectors — the base class handles everything else.

Classes

AnalyzerConfig

Bases: DetectorConfig

Baseline thresholds shared by every language analyzer.

AnalyzerConfig acts as the root configuration for the analysis engine. It inherits discriminated-union plumbing from DetectorConfig and adds the knobs that every language needs — complexity caps, length limits, and feature flags.

Language-specific subclasses (e.g. PythonAnalyzerConfig) extend this with additional fields without repeating the common ones.

ATTRIBUTE DESCRIPTION
type

Discriminator fixed to "analyzer_defaults".

TYPE: Literal['analyzer_defaults']

max_cyclomatic_complexity

Upper bound on average cyclomatic complexity before a violation is emitted (1-50, default 10).

TYPE: int

max_nesting_depth

Maximum permitted indentation nesting depth (1-10, default 3).

TYPE: int

max_function_length

Line count ceiling for a single function body (10-500, default 50).

TYPE: int

max_class_length

Line count ceiling for a class definition (1-1000, default 300).

TYPE: int

max_magic_methods

Allowed dunder-method count per class (0-50, default 3).

TYPE: int

severity_threshold

Minimum severity a violation must reach to appear in final results (1-10, default 5).

TYPE: int

enable_dependency_analysis

When True, the analyzer builds a dependency graph before running detectors.

TYPE: bool

enable_pattern_detection

When True, the RulesAdapter is invoked to merge rule-derived violations.

TYPE: bool

Note

Field boundaries are enforced by Pydantic ge / le constraints — passing an out-of-range value raises a ValidationError at construction time, not at analysis time.
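A standalone illustration of that construction-time rejection, using a trimmed-down stand-in for `AnalyzerConfig` (field names match the attributes above, but the real class inherits from `DetectorConfig` and declares many more fields):

```python
from pydantic import BaseModel, Field, ValidationError


class MiniAnalyzerConfig(BaseModel):
    # Mirrors two of the constrained fields documented above.
    max_cyclomatic_complexity: int = Field(default=10, ge=1, le=50)
    max_nesting_depth: int = Field(default=3, ge=1, le=10)


MiniAnalyzerConfig(max_cyclomatic_complexity=15)      # in range: constructs fine

try:
    MiniAnalyzerConfig(max_cyclomatic_complexity=99)  # violates le=50
except ValidationError:
    print("rejected at construction time, long before analysis runs")
```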

See Also

PythonAnalyzerConfig: Python-specific extensions. BaseAnalyzer: Consumer that reads these thresholds during analysis.

PythonAnalyzerConfig

Bases: AnalyzerConfig

Python-specific analyzer settings layered on top of the base defaults.

Python analysis adds checks that only make sense for CPython semantics: magic-method proliferation (__init__, __str__, …) and God Class detection. These flags let callers selectively enable or tune those detectors without affecting the shared thresholds inherited from AnalyzerConfig.

ATTRIBUTE DESCRIPTION
detect_magic_methods

Enable the magic-method-count detector that flags classes overloading too many dunder protocols.

TYPE: bool

detect_god_classes

Enable the God Class detector that identifies classes with excessive responsibility.

TYPE: bool

max_magic_methods

Ceiling on dunder methods per class before a violation is raised (overrides the base default of 3).

TYPE: int

See Also

AnalyzerConfig: Inherited base thresholds.

TypeScriptAnalyzerConfig

Bases: AnalyzerConfig

TypeScript-specific analyzer settings.

TypeScript analysis extends the base thresholds with checks targeting the type system: unrestrained any usage and overly-generic type parameter lists. These flags complement — rather than replace — the complexity and length limits inherited from AnalyzerConfig.

ATTRIBUTE DESCRIPTION
detect_any_usage

Enable detection of any type annotations that bypass the TypeScript type checker.

TYPE: bool

max_type_parameters

Maximum generic type parameters allowed on a single declaration before a violation is emitted.

TYPE: int

See Also

AnalyzerConfig: Inherited base thresholds.

RustAnalyzerConfig

Bases: AnalyzerConfig

Rust-specific analyzer settings.

Rust analysis adds safety-oriented checks that are unique to the language's ownership model: unwrap() calls that can panic at runtime and unsafe blocks that opt out of borrow-checker guarantees. Inherits shared complexity limits from AnalyzerConfig.

ATTRIBUTE DESCRIPTION
detect_unwrap_usage

Enable detection of .unwrap() calls on Result and Option types that risk runtime panics.

TYPE: bool

detect_unsafe_blocks

Enable detection of unsafe { … } blocks that bypass Rust's memory-safety guarantees.

TYPE: bool

See Also

AnalyzerConfig: Inherited base thresholds.

AstStatus

Bases: StrEnum

Status of AST availability for the current analysis run.

AnalyzerCapabilities

Bases: BaseModel

Language capability flags surfaced to analysis context and detectors.

AnalysisContext

Bases: BaseModel

Type-safe state container that flows through the analysis pipeline.

AnalysisContext implements the Context Object pattern: instead of threading a growing list of positional arguments through every detector, the analyzer populates a single Pydantic model with the raw source, parsed AST, computed metrics, and cross-file metadata. Each ViolationDetector reads only the fields it needs, and the model's type annotations give IDE autocomplete for free.

The lifecycle of a context mirrors the steps inside BaseAnalyzer.analyze:

  1. Created with raw code and optional path.
  2. Enriched by parse_code() (sets ast_tree and ast_status).
  3. Enriched by compute_metrics() (sets cyclomatic_summary, maintainability_index, lines_of_code).
  4. Enriched by _build_dependency_analysis() (sets dependency_analysis).
  5. Consumed by each detector in the pipeline.
ATTRIBUTE DESCRIPTION
code

Raw source text submitted for analysis.

TYPE: str

path

Filesystem path associated with the source, when known.

TYPE: str | None

language

Language identifier (e.g. "python", "typescript").

TYPE: str

ast_tree

Parsed syntax tree produced by the language-specific parse_code() hook, or None if parsing failed.

TYPE: ParserResult | None

ast_status

Tri-state AST availability marker: unsupported / parse_failed / parsed.

TYPE: AstStatus

capabilities

Analyzer capability declaration used to explain which analysis surfaces are expected to be available.

TYPE: AnalyzerCapabilities

cyclomatic_summary

Aggregated cyclomatic-complexity statistics for every block in the source.

TYPE: CyclomaticSummary | None

maintainability_index

Halstead / McCabe maintainability score (0-100 scale).

TYPE: float | None

lines_of_code

Physical line count of the source text.

TYPE: int

dependency_analysis

Language-specific dependency graph payload, or None when dependency analysis is disabled.

TYPE: object | None

violations

Mutable list of violations accumulated during pipeline execution.

TYPE: list[Violation]

other_files

Sibling file contents keyed by path, enabling cross-file detectors (e.g. duplicate-code).

TYPE: dict[str, str] | None

repository_imports

Per-file import index built from the wider repository, enabling coupling analysis.

TYPE: dict[str, list[str]] | None

See Also

BaseAnalyzer.analyze: Orchestrator that creates and enriches this context. ViolationDetector.detect: Consumer interface that reads context fields.

ViolationDetector

ViolationDetector()

Bases: ABC

Abstract base for individual violation-detection strategies.

Every concrete detector encapsulates exactly one kind of code-smell check — cyclomatic complexity, nesting depth, God Class, etc. — behind the uniform detect() contract defined here. This is the Strategy pattern: the DetectionPipeline iterates over a list of ViolationDetector instances without knowing (or caring) which algorithm each one uses.

Subclasses must implement:

  • detect() — inspect the AnalysisContext and return zero or more Violation objects.
  • name (property) — return a human-readable identifier used in error logging.

The helper build_violation() is provided so detectors never have to manually wire up principle IDs, severity defaults, or message selection — those are resolved from the detector's own DetectorConfig.
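A toy Strategy implementation, using plain dicts in place of the real `AnalysisContext` and config models (the class name, fields, and return type here are illustrative, not the package's signatures):

```python
class LongFunctionDetector:
    """One Strategy: flag functions whose body exceeds a line-count limit."""

    name = "function_length"

    def detect(self, context: dict, config: dict) -> list[str]:
        limit = config.get("max_function_length", 50)
        return [
            f"{func} is {length} lines (limit {limit})"
            for func, length in context.get("function_lengths", {}).items()
            if length > limit
        ]


context = {"function_lengths": {"load": 12, "process": 80}}
print(LongFunctionDetector().detect(context, {"max_function_length": 50}))
# ['process is 80 lines (limit 50)']
```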

ATTRIBUTE DESCRIPTION
config

Per-detector configuration injected by the pipeline builder. Contains thresholds, severity, and violation message templates.

TYPE: ConfigT | None

rule_ids

Zen rule identifiers this detector is responsible for.

TYPE: list[str]

See Also

DetectionPipeline: Runner that invokes detect() on every registered detector. AnalysisContext: Shared state read by detectors. DetectorConfig: Base configuration schema for typed detector settings.

Initialize the detector with an empty rule-ID list.

Concrete rule IDs are injected later by the pipeline builder in BaseAnalyzer.build_pipeline after matching this detector to its zen-rule definitions.

Source code in src/mcp_zen_of_languages/analyzers/base.py
def __init__(self) -> None:
    """Initialize the detector with an empty rule-ID list.

    Concrete rule IDs are injected later by the pipeline builder in
    [`BaseAnalyzer.build_pipeline`][mcp_zen_of_languages.analyzers.base.BaseAnalyzer.build_pipeline]
    after matching this detector to its zen-rule definitions.
    """
    self.rule_ids = []
Attributes
name abstractmethod property
name

Human-readable identifier for this detector.

Used in log messages and error reports emitted by DetectionPipeline.run when a detector raises an unexpected exception.

RETURNS DESCRIPTION
str

Short, unique name such as "cyclomatic_complexity" or "god_class".

TYPE: str

Functions
detect abstractmethod
detect(context, config)

Run this detector's algorithm and return any violations found.

Implementations should read only the fields they need from context (e.g. cyclomatic_summary for a complexity check) and compare against thresholds stored in config. Use build_violation to construct Violation instances with correct principle IDs and severity.

PARAMETER DESCRIPTION
context

Pipeline state carrying parsed AST, metrics, and source text populated by earlier analysis stages.

TYPE: AnalysisContext

config

Typed detector configuration holding thresholds, severity, and violation-message templates for this detector.

TYPE: AnalyzerConfig

RETURNS DESCRIPTION
list[Violation]

Zero or more violations discovered by this strategy. An empty list signals clean code for this detector's concern.

TYPE: list[Violation]

Source code in src/mcp_zen_of_languages/analyzers/base.py
@abstractmethod
def detect(self, context: AnalysisContext, config: ConfigT) -> list[Violation]:
    """Run this detector's algorithm and return any violations found.

    Implementations should read only the fields they need from
    *context* (e.g. ``cyclomatic_summary`` for a complexity check) and
    compare against thresholds stored in *config*. Use
    [`build_violation`][mcp_zen_of_languages.analyzers.base.ViolationDetector.build_violation]
    to construct ``Violation`` instances with correct principle IDs and
    severity.

    Args:
        context (AnalysisContext): Pipeline state carrying parsed AST, metrics, and
            source text populated by earlier analysis stages.
        config (AnalyzerConfig): Typed detector configuration holding thresholds,
            severity, and violation-message templates for this
            detector.

    Returns:
        list[Violation]: Zero or more violations discovered by this strategy. An empty
        list signals clean code for this detector's concern.
    """
build_violation
build_violation(
    config,
    *,
    rule_id=None,
    message=None,
    contains=None,
    index=0,
    severity=None,
    location=None,
    suggestion=None,
    files=None,
)

Construct a Violation from detector config with optional overrides.

This convenience factory saves every detector from repeating the same principle-ID look-up, severity resolution, and message selection logic. The algorithm is:

  1. Resolve principle from config.principle, config.principle_id, or config.type (first non-None wins).
  2. If message is not given explicitly, delegate to config.select_violation_message() using contains or index to pick the right template.
  3. If severity is not given, fall back to config.severity or a default of 5.
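The resolution order can be sketched as a standalone function over a plain dict config (illustrative only; the real helper also consults `principle_for_rule` and `severity_for_rule` when a `rule_id` is given):

```python
def resolve_violation_fields(config: dict, *, message=None, severity=None, index=0) -> dict:
    # 1. principle: first non-None of principle, principle_id, type.
    principle = (
        config.get("principle")
        or config.get("principle_id")
        or config.get("type", "violation")
    )
    # 2. message: an explicit value wins, else pick a template by index.
    if message is None:
        templates = config.get("violation_messages", [])
        message = templates[index] if templates else principle
    # 3. severity: explicit value, else config-level, else default 5.
    if severity is None:
        severity = config.get("severity") or 5
    return {"principle": principle, "message": message, "severity": severity}


print(resolve_violation_fields({"type": "god_class", "violation_messages": ["Class does too much"]}))
# {'principle': 'god_class', 'message': 'Class does too much', 'severity': 5}
```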
PARAMETER DESCRIPTION
config

Detector configuration carrying principle metadata, severity, and violation-message templates.

TYPE: DetectorConfig

rule_id

Specific rule identifier for composite detectors. When provided, principle text, severity, and default message selection are resolved from that rule's preserved context. Defaults to None.

TYPE: str | None DEFAULT: None

message

Explicit violation message. When None, the message is auto-selected from the config's template list. Defaults to None.

TYPE: str | None DEFAULT: None

contains

Substring filter passed to select_violation_message to pick a matching template. Defaults to None.

TYPE: str | None DEFAULT: None

index

Zero-based position selecting one template from the config's violation_messages list. Defaults to 0.

TYPE: int DEFAULT: 0

severity

Override severity score (1-10). Falls back to the config-level severity when omitted. Defaults to None.

TYPE: int | None DEFAULT: None

location

Source location to attach to the violation, typically produced by LocationHelperMixin. Defaults to None.

TYPE: Location | None DEFAULT: None

suggestion

Remediation hint shown alongside the violation in reports and IDE integrations. Defaults to None.

TYPE: str | None DEFAULT: None

files

Related file paths included for cross-file violations such as duplicate-code detection. Defaults to None.

TYPE: list[str] | None DEFAULT: None

RETURNS DESCRIPTION
Violation

Fully populated Violation ready for collection by the DetectionPipeline.

TYPE: Violation
Source code in src/mcp_zen_of_languages/analyzers/base.py
def build_violation(  # noqa: PLR0913
    self,
    config: ConfigT,
    *,
    rule_id: str | None = None,
    message: str | None = None,
    contains: str | None = None,
    index: int = 0,
    severity: int | None = None,
    location: Location | None = None,
    suggestion: str | None = None,
    files: list[str] | None = None,
) -> Violation:
    """Construct a ``Violation`` from detector config with optional overrides.

    This convenience factory saves every detector from repeating the
    same principle-ID look-up, severity resolution, and message
    selection logic. The algorithm is:

    1. Resolve ``principle`` from ``config.principle``,
       ``config.principle_id``, or ``config.type`` (first non-``None``
       wins).
    2. If *message* is not given explicitly, delegate to
       ``config.select_violation_message()`` using *contains* or
       *index* to pick the right template.
    3. If *severity* is not given, fall back to ``config.severity``
       or a default of ``5``.

    Args:
        config (DetectorConfig): Detector configuration carrying principle metadata,
            severity, and violation-message templates.
        rule_id (str | None, optional): Specific rule identifier for
            composite detectors. When provided, principle text, severity,
            and default message selection are resolved from that rule's
            preserved context. Default to None.
        message (str | None, optional): Explicit violation message. When ``None``, the
            message is auto-selected from the config's template list. Default to None.
        contains (str | None, optional): Substring filter passed to
            ``select_violation_message`` to pick a matching template. Default to None.
        index (int | None, optional): Zero-based position selecting one template from the
            config's ``violation_messages`` list (default ``0``). Default to 0.
        severity (int | None, optional): Override severity score (1-10). Falls back to the
            config-level severity when omitted. Default to None.
        location (Location | None, optional): Source location to attach to the violation, typically
            produced by
            [`LocationHelperMixin`][mcp_zen_of_languages.analyzers.base.LocationHelperMixin]. Default to None.
        suggestion (str | None, optional): Remediation hint shown alongside the violation in
            reports and IDE integrations. Default to None.
        files (list[str] | None, optional): Related file paths included for cross-file violations
            such as duplicate-code detection. Default to None.

    Returns:
        Violation: Fully populated ``Violation`` ready for collection by the
        [`DetectionPipeline`][mcp_zen_of_languages.analyzers.base.DetectionPipeline].
    """
    principle_resolver = getattr(config, "principle_for_rule", None)
    if callable(principle_resolver):
        principle = principle_resolver(rule_id)
    else:
        principle = (
            getattr(config, "principle", None)
            or getattr(config, "principle_id", None)
            or getattr(config, "type", "violation")
        )
    if message is None:
        selector = getattr(config, "select_violation_message", None)
        if callable(selector):
            message = selector(contains=contains, index=index, rule_id=rule_id)
        else:
            message = principle
    resolved_severity = severity
    if resolved_severity is None:
        severity_resolver = getattr(config, "severity_for_rule", None)
        if callable(severity_resolver):
            resolved_severity = severity_resolver(rule_id, 5)
        else:
            resolved_severity = getattr(config, "severity", None) or 5
    return Violation(
        principle=principle,
        severity=resolved_severity,
        message=message,
        location=location,
        suggestion=suggestion,
        files=files,
        rule_id=rule_id or getattr(config, "principle_id", None),
        detector_id=getattr(config, "type", None),
    )

DetectionPipeline

DetectionPipeline(detectors)

Fail-safe runner that executes detectors in sequence and collects violations.

DetectionPipeline implements the Pipeline / Chain of Responsibility pattern. It owns an ordered list of ViolationDetector instances and calls each one's detect() method against the shared AnalysisContext. Violations from every detector are merged into a single flat list.

Crucially, a failure in one detector is caught and logged but never aborts the remaining detectors. This isolation guarantee means a newly added or experimental detector cannot break production analysis.
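A minimal sketch of that isolation guarantee, with plain callables standing in for `ViolationDetector` instances (names are illustrative):

```python
import logging

logger = logging.getLogger("pipeline_sketch")


class MiniPipeline:
    def __init__(self, detectors):
        self.detectors = detectors

    def run(self, context) -> list[str]:
        violations: list[str] = []
        for detector in self.detectors:
            try:
                violations.extend(detector(context))
            except Exception:
                # Log and move on; one failure never silences the rest.
                logger.exception("Error in detector %s", detector.__name__)
        return violations


def broken_detector(context):
    raise RuntimeError("boom")


def healthy_detector(context):
    return ["one violation"]


print(MiniPipeline([broken_detector, healthy_detector]).run({}))
# ['one violation']
```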

See Also

ViolationDetector: Strategy interface executed by this pipeline. BaseAnalyzer.build_pipeline: Factory that assembles the detector list from zen rules.

Prepare the pipeline with an ordered detector sequence.

PARAMETER DESCRIPTION
detectors

Detector instances to execute, in the order they should run. Order can matter when later detectors depend on side-effects written to AnalysisContext.violations by earlier ones.

TYPE: list[ViolationDetector]

Source code in src/mcp_zen_of_languages/analyzers/base.py
def __init__(self, detectors: list[ViolationDetector]) -> None:
    """Prepare the pipeline with an ordered detector sequence.

    Args:
        detectors (list[ViolationDetector]): Detector instances to execute, in the order they
            should run. Order can matter when later detectors depend
            on side-effects written to ``AnalysisContext.violations``
            by earlier ones.
    """
    self.detectors = detectors
Functions
run
run(context, config)

Execute every detector against the shared context and merge results.

For each detector the pipeline resolves which configuration to use: if the detector carries its own config (injected during pipeline construction), that takes precedence; otherwise the analyzer-level config is used as a fallback.

If a detector raises an exception, the error is logged and the pipeline continues with the next detector — no violations from healthy detectors are lost.

PARAMETER DESCRIPTION
context

Shared analysis state populated by BaseAnalyzer.analyze before the pipeline starts.

TYPE: AnalysisContext

config

Fallback configuration used when a detector does not carry its own per-detector config.

TYPE: AnalyzerConfig

RETURNS DESCRIPTION
list[Violation]

Flat list of violations aggregated from all detectors that executed successfully.

TYPE: list[Violation]

Source code in src/mcp_zen_of_languages/analyzers/base.py
def run(
    self,
    context: AnalysisContext,
    config: AnalyzerConfig | DetectorConfig,
) -> list[Violation]:
    """Execute every detector against the shared context and merge results.

    For each detector the pipeline resolves which configuration to
    use: if the detector carries its own ``config`` (injected during
    pipeline construction), that takes precedence; otherwise the
    analyzer-level *config* is used as a fallback.

    If a detector raises an exception, the error is logged and the
    pipeline continues with the next detector — no violations from
    healthy detectors are lost.

    Args:
        context (AnalysisContext): Shared analysis state populated by
            [`BaseAnalyzer.analyze`][mcp_zen_of_languages.analyzers.base.BaseAnalyzer.analyze]
            before the pipeline starts.
        config (AnalyzerConfig): Fallback configuration used when a detector does not
            carry its own per-detector config.

    Returns:
        list[Violation]: Flat list of violations aggregated from all detectors that
        executed successfully.
    """
    all_violations: list[Violation] = []

    for detector in self.detectors:
        try:
            detector_config = detector.config or config
            detector_name = self._detector_name(detector)
            with analysis_span(
                "detector.run",
                {"language": context.language, "detector": detector_name},
            ):
                violations = detector.detect(context, detector_config)
            all_violations.extend(
                [
                    self._enrich_violation(
                        context,
                        detector,
                        detector_config,
                        detector_name,
                        violation,
                    )
                    for violation in violations
                ],
            )
        except Exception:
            # Log error but continue with other detectors
            detector_name = self._detector_name(detector)
            logger.exception("Error in detector %s", detector_name)

    return all_violations

BaseAnalyzer

BaseAnalyzer(config=None)

Bases: ABC

Abstract skeleton for language-specific code analyzers.

BaseAnalyzer is the Template Method at the heart of the architecture. Its concrete analyze() method defines a fixed seven-step workflow — context creation → parsing → metrics → dependency analysis → detector pipeline → rules adapter → result assembly — while language-specific hooks let each subclass plug in its own behaviour:

Hook Responsibility
parse_code() Turn raw source text into a language AST
compute_metrics() Produce cyclomatic, maintainability, LOC
build_pipeline() Assemble the detector list from zen rules
capabilities() Declare AST/dependency/metrics support

Subclasses such as PythonAnalyzer implement only these hooks; the invariant orchestration logic is never duplicated.

ATTRIBUTE DESCRIPTION
config

Resolved analyzer configuration (base defaults merged with any overrides from zen-config.yaml).

TYPE: AnalyzerConfig

pipeline

Pre-built DetectionPipeline ready to execute against an AnalysisContext.

TYPE: DetectionPipeline

See Also

ViolationDetector: Strategy objects executed inside the pipeline. AnalysisContext: State container flowing through every stage. AnalyzerConfig: Configuration consumed by the analyzer and its detectors.

Bootstrap the analyzer with configuration and a detector pipeline.

If no config is supplied, the language-specific default_config() hook provides sensible defaults. build_pipeline() is called immediately so the detector list is ready before the first analyze() invocation.

PARAMETER DESCRIPTION
config

Explicit analyzer configuration. When None, the subclass's default_config() is used. Defaults to None.

TYPE: AnalyzerConfig | None DEFAULT: None

RAISES DESCRIPTION
TypeError

If config is not None and not an AnalyzerConfig instance.

Source code in src/mcp_zen_of_languages/analyzers/base.py
def __init__(self, config: AnalyzerConfig | None = None) -> None:
    """Bootstrap the analyzer with configuration and a detector pipeline.

    If no *config* is supplied, the language-specific ``default_config()``
    hook provides sensible defaults.  ``build_pipeline()`` is called
    immediately so the detector list is ready before the first
    ``analyze()`` invocation.

    Args:
        config (AnalyzerConfig | None, optional): Explicit analyzer configuration. When ``None``, the
            subclass's ``default_config()`` is used. Default to None.

    Raises:
        TypeError: If *config* is not ``None`` and not an
            ``AnalyzerConfig`` instance.
    """
    if config is not None and not isinstance(config, AnalyzerConfig):
        msg = "AnalyzerConfig instance required"
        raise TypeError(msg)
    self.config: AnalyzerConfig = config or self.default_config()
    self.pipeline: DetectionPipeline = self.build_pipeline()
Functions
default_config abstractmethod
default_config()

Provide the default configuration for this language.

Called by __init__ when the caller does not pass an explicit config. Language subclasses return their own typed config (e.g. PythonAnalyzerConfig) pre-populated with sensible defaults.

RETURNS DESCRIPTION
AnalyzerConfig

Language-appropriate configuration with default thresholds.

TYPE: AnalyzerConfig

Source code in src/mcp_zen_of_languages/analyzers/base.py
@abstractmethod
def default_config(self) -> AnalyzerConfig:
    """Provide the default configuration for this language.

    Called by ``__init__`` when the caller does not pass an explicit
    config. Language subclasses return their own typed config (e.g.
    [`PythonAnalyzerConfig`][mcp_zen_of_languages.analyzers.base.PythonAnalyzerConfig])
    pre-populated with sensible defaults.

    Returns:
        AnalyzerConfig: Language-appropriate configuration with default thresholds.
    """
language abstractmethod
language()

Return the language identifier this analyzer handles.

The string must match the keys used in zen-config.yaml and the language registry (e.g. "python", "typescript", "rust").

RETURNS DESCRIPTION
str

Lowercase language name used for rule lookup and result tagging.

TYPE: str

Source code in src/mcp_zen_of_languages/analyzers/base.py
@abstractmethod
def language(self) -> str:
    """Return the language identifier this analyzer handles.

    The string must match the keys used in ``zen-config.yaml`` and
    the language registry (e.g. ``"python"``, ``"typescript"``,
    ``"rust"``).

    Returns:
        str: Lowercase language name used for rule lookup and result
        tagging.
    """
parse_code abstractmethod
parse_code(code)

Parse raw source text into a language-specific syntax tree.

This is the first Template Method hook called by analyze(). Python subclasses typically delegate to the ast module; other languages may use tree-sitter or custom parsers.

PARAMETER DESCRIPTION
code

Complete source text of the file being analyzed.

TYPE: str

RETURNS DESCRIPTION
ParserResult | None

Wrapped parse result, or None when the source cannot be parsed (e.g. syntax errors). A None return does not abort analysis — metric computation and detectors will proceed with whatever data is available.

TYPE: ParserResult | None

Source code in src/mcp_zen_of_languages/analyzers/base.py
@abstractmethod
def parse_code(self, code: str) -> ParserResult | None:
    """Parse raw source text into a language-specific syntax tree.

    This is the first Template Method hook called by ``analyze()``.
    Python subclasses typically delegate to the ``ast`` module; other
    languages may use tree-sitter or custom parsers.

    Args:
        code (str): Complete source text of the file being analyzed.

    Returns:
        ParserResult | None: Wrapped parse result, or ``None`` when the source cannot be
        parsed (e.g. syntax errors). A ``None`` return does not abort
        analysis — metric computation and detectors will proceed
        with whatever data is available.
    """
compute_metrics abstractmethod
compute_metrics(code, ast_tree)

Compute quantitative code-quality metrics for the given source.

This is the second Template Method hook. Implementations should calculate at least cyclomatic complexity, a maintainability index, and a physical line count. The returned tuple is unpacked by analyze() and stored on the AnalysisContext for downstream detectors.

PARAMETER DESCRIPTION
code

Source text to measure.

TYPE: str

ast_tree

Previously parsed syntax tree (may be None if parsing failed), useful for AST-driven metric tools.

TYPE: ParserResult | None

RETURNS DESCRIPTION
tuple[CyclomaticSummary | None, float | None, int]

Three-element tuple of (cyclomatic_summary, maintainability_index, lines_of_code). Any element may be None when the corresponding metric is unavailable.

TYPE: tuple[CyclomaticSummary | None, float | None, int]

Source code in src/mcp_zen_of_languages/analyzers/base.py
@abstractmethod
def compute_metrics(
    self,
    code: str,
    ast_tree: ParserResult | None,
) -> tuple[CyclomaticSummary | None, float | None, int]:
    """Compute quantitative code-quality metrics for the given source.

    This is the second Template Method hook. Implementations should
    calculate at least cyclomatic complexity, a maintainability index,
    and a physical line count. The returned tuple is unpacked by
    ``analyze()`` and stored on the
    [`AnalysisContext`][mcp_zen_of_languages.analyzers.base.AnalysisContext]
    for downstream detectors.

    Args:
        code (str): Source text to measure.
        ast_tree (ParserResult | None): Previously parsed syntax tree (may be ``None`` if
            parsing failed), useful for AST-driven metric tools.

    Returns:
        tuple[CyclomaticSummary | None, float | None, int]: Three-element tuple of ``(cyclomatic_summary,
        maintainability_index, lines_of_code)``. Any element may be
        ``None`` when the corresponding metric is unavailable.
    """
capabilities
capabilities()

Declare language features supported by this analyzer implementation.

The default implementation marks all advanced capabilities as unsupported. Language analyzers with concrete parser or dependency support override this method to advertise availability.

RETURNS DESCRIPTION
AnalyzerCapabilities

Capability flags consumed by AnalysisContext and downstream detectors.

TYPE: AnalyzerCapabilities

Source code in src/mcp_zen_of_languages/analyzers/base.py
def capabilities(self) -> AnalyzerCapabilities:
    """Declare language features supported by this analyzer implementation.

    The default implementation marks all advanced capabilities as
    unsupported. Language analyzers with concrete parser or dependency
    support override this method to advertise availability.

    Returns:
        AnalyzerCapabilities: Capability flags consumed by
            ``AnalysisContext`` and downstream detectors.
    """
    return AnalyzerCapabilities()
analyze
analyze(
    code,
    path=None,
    other_files=None,
    repository_imports=None,
    *,
    enable_external_tools=False,
    allow_temporary_tools=False,
)

Run the full analysis workflow against a single source file.

This is the Template Method: it defines the invariant seven-step algorithm that every language analyzer follows, calling abstract hooks (parse_code, compute_metrics) and the pre-built DetectionPipeline at the appropriate moments.

Workflow steps:

  1. Create an AnalysisContext from the inputs.
  2. Parse source via parse_code() → context.ast_tree.
  3. Compute metrics via compute_metrics() → cyclomatic, maintainability, LOC.
  4. (Optional) Build dependency graph → context.dependency_analysis.
  5. Run the detector pipeline → initial violation list.
  6. Merge with RulesAdapter violations and attach rules_summary (gracefully skipped if the adapter is unavailable).
  7. Assemble and return the final AnalysisResult.
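The Template Method skeleton behind these steps can be sketched in isolation. This is a simplified stand-in, not the real BaseAnalyzer — ToyAnalyzer and LineCountAnalyzer are illustrative names only:

```python
from abc import ABC, abstractmethod


class ToyAnalyzer(ABC):
    """Minimal Template Method: analyze() fixes the order, subclasses fill hooks."""

    def analyze(self, code: str) -> dict:
        tree = self.parse_code(code)          # Template hook 1
        loc = self.compute_metrics(code)      # Template hook 2
        violations = self.run_pipeline(tree)  # invariant step
        return {"lines_of_code": loc, "violations": violations}

    @abstractmethod
    def parse_code(self, code: str): ...

    @abstractmethod
    def compute_metrics(self, code: str) -> int: ...

    def run_pipeline(self, tree) -> list[str]:
        # Stand-in for DetectionPipeline.run(); returns no findings here.
        return []


class LineCountAnalyzer(ToyAnalyzer):
    """A 'language' whose parse tree is just the list of lines."""

    def parse_code(self, code: str):
        return code.splitlines()

    def compute_metrics(self, code: str) -> int:
        return len(code.splitlines())


result = LineCountAnalyzer().analyze("a = 1\nb = 2\n")
# result["lines_of_code"] == 2
```

The point of the pattern is that `analyze()` never changes: adding a language means implementing the hooks, nothing more.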
PARAMETER DESCRIPTION
code

Complete source text to analyze.

TYPE: str

path

Filesystem path of the source file, used for cross-file detectors and result metadata. Defaults to None.

TYPE: str | None DEFAULT: None

other_files

Map of sibling file paths to their contents, enabling detectors like duplicate-code that compare across files. Defaults to None.

TYPE: dict[str, str] | None DEFAULT: None

repository_imports

Per-file import lists from the wider repository, enabling coupling and dependency-fan detectors. Defaults to None.

TYPE: dict[str, list[str]] | None DEFAULT: None

enable_external_tools

Run allow-listed external linters/tools in best-effort mode for additional diagnostics. Defaults to False.

TYPE: bool DEFAULT: False

allow_temporary_tools

Allow temporary-runner strategies (for example npx/uvx) when direct/no-install resolution is unavailable. Defaults to False.

TYPE: bool DEFAULT: False

RETURNS DESCRIPTION
AnalysisResult

Fully populated analysis result containing metrics, violations, an overall quality score, and (when available) a rules_summary.

TYPE: AnalysisResult

Source code in src/mcp_zen_of_languages/analyzers/base.py
def analyze(  # noqa: PLR0913
    self,
    code: str,
    path: str | None = None,
    other_files: dict[str, str] | None = None,
    repository_imports: dict[str, list[str]] | None = None,
    *,
    enable_external_tools: bool = False,
    allow_temporary_tools: bool = False,
) -> AnalysisResult:
    """Run the full analysis workflow against a single source file.

    This is the **Template Method**: it defines the invariant
    seven-step algorithm that every language analyzer follows, calling
    abstract hooks (``parse_code``, ``compute_metrics``) and the
    pre-built [`DetectionPipeline`][mcp_zen_of_languages.analyzers.base.DetectionPipeline]
    at the appropriate moments.

    **Workflow steps:**

    1. Create an [`AnalysisContext`][mcp_zen_of_languages.analyzers.base.AnalysisContext]
       from the inputs.
    2. Parse source via ``parse_code()`` → ``context.ast_tree``.
    3. Compute metrics via ``compute_metrics()`` → cyclomatic,
       maintainability, LOC.
    4. (Optional) Build dependency graph →
       ``context.dependency_analysis``.
    5. Run the detector pipeline → initial violation list.
    6. Merge with ``RulesAdapter`` violations and attach
       ``rules_summary`` (gracefully skipped if the adapter is
       unavailable).
    7. Assemble and return the final ``AnalysisResult``.

    Args:
        code (str): Complete source text to analyze.
        path (str | None, optional): Filesystem path of the source file, used for
            cross-file detectors and result metadata. Defaults to None.
        other_files (dict[str, str] | None, optional): Map of sibling file paths to
            their contents, enabling detectors like duplicate-code that
            compare across files. Defaults to None.
        repository_imports (dict[str, list[str]] | None, optional): Per-file import
            lists from the wider repository, enabling coupling and
            dependency-fan detectors. Defaults to None.
        enable_external_tools (bool, optional): Run allow-listed external
            linters/tools in best-effort mode for additional diagnostics.
            Defaults to False.
        allow_temporary_tools (bool, optional): Allow temporary-runner strategies
            (for example ``npx``/``uvx``) when direct/no-install resolution
            is unavailable. Defaults to False.

    Returns:
        AnalysisResult: Fully populated analysis result containing metrics,
        violations, an overall quality score, and (when available)
        a ``rules_summary``.
    """
    with analysis_span(
        "analyzer.analyze",
        {"language": self.language(), "path": path or "<snippet>"},
    ):
        context = self._create_context(
            code=code,
            path=path,
            other_files=other_files,
            repository_imports=repository_imports,
        )

        with analysis_span("analyzer.parse", {"language": self.language()}):
            context.ast_tree = self.parse_code(code)
            if context.capabilities.supports_ast:
                context.ast_status = (
                    AstStatus.parsed
                    if context.ast_tree is not None
                    else AstStatus.parse_failed
                )

        with analysis_span("analyzer.metrics", {"language": self.language()}):
            cc, mi, loc = self.compute_metrics(code, context.ast_tree)
            context.cyclomatic_summary = cc
            context.maintainability_index = mi
            context.lines_of_code = loc

        with analysis_span("analyzer.dependencies", {"language": self.language()}):
            if self.config.enable_dependency_analysis:
                context.dependency_analysis = self._build_dependency_analysis(
                    context
                )
            context.external_analysis = self._build_external_analysis(
                context,
                enable_external_tools=enable_external_tools,
                allow_temporary_tools=allow_temporary_tools,
            )

        with analysis_span(
            "analyzer.pipeline",
            {
                "language": self.language(),
                "detector_count": len(self.pipeline.detectors),
            },
        ):
            violations = self.pipeline.run(context, self.config)

        result = self._build_result(context, violations)

        try:
            from mcp_zen_of_languages.adapters.rules_adapter import RulesAdapter
            from mcp_zen_of_languages.adapters.rules_adapter import (
                RulesAdapterConfig,
            )
            from mcp_zen_of_languages.models import DependencyAnalysis

            adapter_config = RulesAdapterConfig(
                max_nesting_depth=self.config.max_nesting_depth,
                max_cyclomatic_complexity=self.config.max_cyclomatic_complexity,
                min_maintainability_index=None,
            )

            adapter = RulesAdapter(language=self.language(), config=adapter_config)

            dep_analysis: DependencyAnalysis | None = None
            raw_dep = context.dependency_analysis
            if isinstance(raw_dep, DependencyAnalysis):
                dep_analysis = raw_dep
            elif isinstance(raw_dep, dict):
                try:
                    dep_analysis = DependencyAnalysis.model_validate(raw_dep)
                except (ValueError, TypeError):
                    dep_analysis = None

            with analysis_span(
                "analyzer.rules_adapter",
                {"language": self.language()},
            ):
                rules_violations = (
                    adapter.find_violations(
                        code=code,
                        cyclomatic_summary=context.cyclomatic_summary,
                        maintainability_index=context.maintainability_index,
                        dependency_analysis=dep_analysis,
                    )
                    if self.config.enable_pattern_detection
                    else []
                )
            all_violations = violations + rules_violations
            result.rules_summary = RulesSummary(
                **adapter.summarize_violations(all_violations),
            )
            result.violations = all_violations
            from mcp_zen_of_languages.dogmas.interface import attach_dogma_analysis

            result = attach_dogma_analysis(
                result.model_copy(update={"dogma_analysis": None})
            )
        except Exception as exc:  # noqa: BLE001
            logger.debug(
                "RulesAdapter integration failed; continuing", exc_info=exc
            )

        return result
build_pipeline
build_pipeline()

Assemble the detector pipeline from zen rules and config overrides.

The default implementation follows a four-stage process:

  1. Load canonical zen rules for self.language() via get_language_zen().
  2. Project those rules into DetectorConfig instances using the detector registry.
  3. Merge any overrides from zen-config.yaml (matched by detector type).
  4. Instantiate each registered detector, inject its config and rule IDs, and wrap them in a DetectionPipeline.

Language subclasses may override this method entirely to build a hand-crafted pipeline, but the rule-driven default covers most use-cases.

RETURNS DESCRIPTION
DetectionPipeline

Ready-to-run pipeline containing all detectors registered for this language.

TYPE: DetectionPipeline

RAISES DESCRIPTION
ValueError

If no zen rules are found for the language.

Source code in src/mcp_zen_of_languages/analyzers/base.py
def build_pipeline(self) -> DetectionPipeline:
    """Assemble the detector pipeline from zen rules and config overrides.

    The default implementation follows a four-stage process:

    1. Load canonical zen rules for ``self.language()`` via
       ``get_language_zen()``.
    2. Project those rules into
       [`DetectorConfig`][mcp_zen_of_languages.languages.configs.DetectorConfig]
       instances using the detector registry.
    3. Merge any overrides from ``zen-config.yaml`` (matched by
       detector ``type``).
    4. Instantiate each registered detector, inject its config and
       rule IDs, and wrap them in a
       [`DetectionPipeline`][mcp_zen_of_languages.analyzers.base.DetectionPipeline].

    Language subclasses may override this method entirely to build a
    hand-crafted pipeline, but the rule-driven default covers most
    use-cases.

    Returns:
        DetectionPipeline: Ready-to-run pipeline containing all detectors registered
        for this language.

    Raises:
        ValueError: If no zen rules are found for the language.
    """
    from mcp_zen_of_languages.analyzers import registry_bootstrap  # noqa: F401
    from mcp_zen_of_languages.analyzers.pipeline import PipelineConfig
    from mcp_zen_of_languages.analyzers.registry import REGISTRY
    from mcp_zen_of_languages.rules import get_language_zen

    lang_zen = get_language_zen(self.language())
    if lang_zen is None:
        msg = f"No zen rules for language: {self.language()}"
        raise ValueError(msg)
    if self._pipeline_config is None:
        self._pipeline_config = PipelineConfig(
            language=self.language(),
            detectors=[],
        )

    base_config = PipelineConfig(
        language=self.language(),
        detectors=REGISTRY.configs_from_rules(lang_zen),
    )
    if self._pipeline_config:
        merged = REGISTRY.merge_configs(
            base_config.detectors,
            self._pipeline_config.detectors,
        )
        pipeline_config = PipelineConfig(
            language=base_config.language,
            detectors=merged,
        )
    else:
        pipeline_config = base_config

    analyzer_defaults = AnalyzerConfig()
    detectors: list[ViolationDetector] = []
    for detector_config in pipeline_config.detectors:
        if detector_config.type == "analyzer_defaults":
            if isinstance(detector_config, AnalyzerConfig):
                analyzer_defaults = detector_config
            continue
        meta = REGISTRY.get(detector_config.type)
        detector = meta.detector_class()
        detector.config = detector_config
        detector.rule_ids = list(meta.rule_ids)
        detectors.append(detector)

    self.config = analyzer_defaults
    return DetectionPipeline(detectors)

LocationHelperMixin

Reusable utilities for mapping code artefacts to source locations.

This mixin is designed to be mixed into ViolationDetector subclasses that need to pin violations to exact line/column positions. It provides two complementary strategies:

  • Substring search — scan raw source text for a token and return the first matching Location.
  • AST-node conversion — extract lineno / col_offset from a Python (or compatible) AST node.
See Also

ViolationDetector.build_violation: Accepts a location kwarg typically produced by these helpers.

Functions
find_location_by_substring
find_location_by_substring(code, substring)

Locate the first occurrence of substring in the source text.

Scans code line-by-line and returns a one-based Location pointing to the first character of the match. When the substring is not found the method returns Location(line=1, column=1) as a safe fallback rather than raising.

PARAMETER DESCRIPTION
code

Full source text to search, potentially multi-line.

TYPE: str

substring

Token or identifier to locate (exact, case-sensitive match).

TYPE: str

RETURNS DESCRIPTION
Location

One-based Location of the first match, or (1, 1) when the substring does not appear in code.

TYPE: Location

Source code in src/mcp_zen_of_languages/analyzers/base.py
def find_location_by_substring(self, code: str, substring: str) -> Location:
    """Locate the first occurrence of *substring* in the source text.

    Scans *code* line-by-line and returns a one-based ``Location``
    pointing to the first character of the match. When the substring
    is not found the method returns ``Location(line=1, column=1)``
    as a safe fallback rather than raising.

    Args:
        code (str): Full source text to search, potentially multi-line.
        substring (str): Token or identifier to locate (exact,
            case-sensitive match).

    Returns:
        Location: One-based ``Location`` of the first match, or ``(1, 1)``
        when the substring does not appear in *code*.
    """
    lines = code.splitlines()
    for i, line in enumerate(lines, start=1):
        col = line.find(substring)
        if col >= 0:
            return Location(line=i, column=col + 1)
    return Location(line=1, column=1)
ast_node_to_location
ast_node_to_location(_ast_tree, node)

Extract a Location from a Python-style AST node.

Reads lineno and col_offset attributes via getattr so this helper works with any AST node that exposes those fields (stdlib ast, tree-sitter adapters, etc.). Column offsets are converted from zero-based to one-based to match the Location convention.

PARAMETER DESCRIPTION
_ast_tree

Parsed tree wrapper (currently unused but reserved for future tree-sitter adapters that need the root).

TYPE: ParserResult | None

node

AST node expected to carry lineno (int) and col_offset (int) attributes.

TYPE: object | None

RETURNS DESCRIPTION
Location | None

One-based Location when both attributes are present, otherwise None.

TYPE: Location | None

Source code in src/mcp_zen_of_languages/analyzers/base.py
def ast_node_to_location(
    self,
    _ast_tree: ParserResult | None,
    node: object | None,
) -> Location | None:
    """Extract a ``Location`` from a Python-style AST node.

    Reads ``lineno`` and ``col_offset`` attributes via ``getattr``
    so this helper works with any AST node that exposes those fields
    (stdlib ``ast``, tree-sitter adapters, etc.). Column offsets are
    converted from zero-based to one-based to match the ``Location``
    convention.

    Args:
        _ast_tree (ParserResult | None): Parsed tree wrapper (currently unused but reserved
            for future tree-sitter adapters that need the root).
        node (object | None): AST node expected to carry ``lineno`` (int) and
            ``col_offset`` (int) attributes.

    Returns:
        Location | None: One-based ``Location`` when both attributes are present,
        otherwise ``None``.
    """
    if node is None:
        return None

    try:
        # Extract tree if ParserResult wrapper
        # Try to get line/column from node
        lineno = getattr(node, "lineno", None)
        col_offset = getattr(node, "col_offset", None)

        if lineno is not None and col_offset is not None:
            return Location(line=int(lineno), column=int(col_offset) + 1)
    except (TypeError, ValueError):
        pass

    return None


mcp_zen_of_languages.analyzers.pipeline

Rule-to-config projection and pipeline override merging.

Zen principles defined in languages/*/rules.py carry metric thresholds (e.g. max_cyclomatic_complexity: 10) but detectors need typed DetectorConfig instances. This module bridges the gap with two operations:

  1. Projection — each principle's metrics dict is mapped onto the config fields of every detector registered for that rule, producing a typed config per detector.
  2. Override merging — user-supplied zen-config.yaml pipeline entries are merged over the rule-derived defaults by matching on DetectorConfig.type, so users can tighten or relax thresholds without modifying the canonical rule definitions.
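The projection step can be illustrated with plain dataclasses. NestingConfig and project_metrics below are hypothetical stand-ins for a real DetectorConfig and the registry's projection logic, showing the fail-fast behaviour on unknown metric keys:

```python
from dataclasses import dataclass, fields


@dataclass
class NestingConfig:
    type: str = "deep_nesting"
    max_nesting_depth: int = 4


def project_metrics(config_cls, metrics: dict):
    """Map a principle's metrics dict onto matching config fields.

    Unknown keys raise immediately, so a typo in a rule definition
    surfaces at startup instead of silently doing nothing.
    """
    known = {f.name for f in fields(config_cls)}
    unknown = set(metrics) - known
    if unknown:
        raise ValueError(f"Unknown metric keys: {sorted(unknown)}")
    return config_cls(**metrics)


cfg = project_metrics(NestingConfig, {"max_nesting_depth": 3})
# cfg.max_nesting_depth == 3; unmatched fields keep their defaults
```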
See Also

mcp_zen_of_languages.analyzers.registry — performs the actual projection and merge logic that this module delegates to.

Classes

PipelineConfig

Bases: BaseModel

Typed container for the detector configs that drive a language pipeline.

A PipelineConfig holds an ordered list of DetectorConfig instances ready for execution by DetectionPipeline. Configs are either projected from zen principles via from_rules or loaded from zen-config.yaml and validated through the registry's discriminated-union TypeAdapter.

ATTRIBUTE DESCRIPTION
language

ISO-style language identifier (e.g. "python", "typescript").

TYPE: str

detectors

Ordered detector configs; validated on assignment via _validate_detectors.

TYPE: list[DetectorConfig]

Functions
from_rules classmethod
from_rules(language)

Build a complete pipeline by projecting a language's zen principles.

Loads the LanguageZenPrinciples for language, then delegates to configs_from_rules to project each principle's metrics onto the matching detector configs.

PARAMETER DESCRIPTION
language

Language key recognised by get_language_zen (e.g. "python").

TYPE: str

RETURNS DESCRIPTION
PipelineConfig

A fully populated PipelineConfig whose detector list reflects all zen principles defined for the language.

TYPE: PipelineConfig

RAISES DESCRIPTION
ValueError

If no zen rules exist for language.

Examples:

>>> cfg = PipelineConfig.from_rules("python")
>>> cfg.language
'python'
Source code in src/mcp_zen_of_languages/analyzers/pipeline.py
@classmethod
def from_rules(cls, language: str) -> PipelineConfig:
    """Build a complete pipeline by projecting a language's zen principles.

    Loads the [`LanguageZenPrinciples`][mcp_zen_of_languages.rules.base_models.LanguageZenPrinciples]
    for *language*, then delegates to
    ``configs_from_rules``
    to project each principle's metrics onto the matching detector configs.

    Args:
        language (str): Language key recognised by [`get_language_zen`][mcp_zen_of_languages.rules.get_language_zen]
            (e.g. ``"python"``).

    Returns:
        PipelineConfig: A fully populated ``PipelineConfig`` whose detector list reflects
        all zen principles defined for the language.

    Raises:
        ValueError: If no zen rules exist for *language*.

    Examples:
        >>> cfg = PipelineConfig.from_rules("python")
        >>> cfg.language
        'python'
    """
    lang_zen = get_language_zen(language)
    if not lang_zen:
        msg = f"No zen rules for language: {language}"
        raise ValueError(msg)
    from mcp_zen_of_languages.analyzers.registry import REGISTRY

    detectors = REGISTRY.configs_from_rules(lang_zen)
    return cls(language=language, detectors=detectors)

Functions

project_rules_to_configs

project_rules_to_configs(lang_zen)

Convert zen principle metric thresholds into typed detector configs.

For every ZenPrinciple in lang_zen, the function resolves which detectors are registered for that rule and maps the principle's metrics dict onto each detector's config fields. Keys that don't match any registered config field raise immediately so typos in rule definitions are caught at startup.

PARAMETER DESCRIPTION
lang_zen

The complete set of zen principles for a single language, including metric thresholds and violation specs.

TYPE: LanguageZenPrinciples

RETURNS DESCRIPTION
list[DetectorConfig]

list[DetectorConfig]: Ordered detector configs with thresholds populated from the rules.

See Also

DetectorRegistry.configs_from_rules — the registry method this function delegates to.

Source code in src/mcp_zen_of_languages/analyzers/pipeline.py
def project_rules_to_configs(lang_zen: LanguageZenPrinciples) -> list[DetectorConfig]:
    """Convert zen principle metric thresholds into typed detector configs.

    For every [`ZenPrinciple`][mcp_zen_of_languages.rules.base_models.ZenPrinciple]
    in *lang_zen*, the function resolves which detectors are registered for
    that rule and maps the principle's ``metrics`` dict onto each detector's
    config fields.  Keys that don't match any registered config field raise
    immediately so typos in rule definitions are caught at startup.

    Args:
        lang_zen (LanguageZenPrinciples): The complete set of zen principles for a single language,
            including metric thresholds and violation specs.

    Returns:
        list[DetectorConfig]: Ordered detector configs with thresholds populated from the rules.

    See Also:
        ``DetectorRegistry.configs_from_rules``
        — the registry method this function delegates to.
    """
    from mcp_zen_of_languages.analyzers.registry import REGISTRY

    return REGISTRY.configs_from_rules(lang_zen)

merge_pipeline_overrides

merge_pipeline_overrides(base, overrides)

Layer user overrides from zen-config.yaml onto rule-derived defaults.

Override entries are matched to base entries by DetectorConfig.type. When a match is found, only the fields explicitly set in the override are applied (via model_dump(exclude_unset=True)), preserving every rule-derived default that the user didn't touch. Overrides whose type doesn't appear in the base are appended as new detector entries.
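The merge semantics can be sketched with plain dicts standing in for DetectorConfig models; the exclude_unset behaviour is emulated by the override dict containing only the keys the user actually set:

```python
def merge_configs(base: list[dict], overrides: list[dict]) -> list[dict]:
    """Merge override entries onto base entries, keyed by 'type'.

    Only keys present in an override are applied; overrides whose type
    has no base entry are appended as new detector entries.
    """
    by_type = {entry["type"]: dict(entry) for entry in base}
    for override in overrides:
        if override["type"] in by_type:
            by_type[override["type"]].update(override)  # partial update
        else:
            by_type[override["type"]] = dict(override)  # new detector
    return list(by_type.values())


base = [{"type": "complexity", "max_cyclomatic_complexity": 10, "enabled": True}]
merged = merge_configs(base, [{"type": "complexity", "max_cyclomatic_complexity": 7}])
# merged[0] keeps enabled=True but tightens the threshold to 7
```

The real implementation performs the same match-then-partial-update, but on typed Pydantic models via model_dump(exclude_unset=True).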

PARAMETER DESCRIPTION
base

Pipeline produced by PipelineConfig.from_rules with thresholds derived from canonical zen principles.

TYPE: PipelineConfig

overrides

Pipeline section from zen-config.yaml, or None to skip merging entirely.

TYPE: PipelineConfig | None

RETURNS DESCRIPTION
PipelineConfig

A new PipelineConfig containing the merged detector list.

TYPE: PipelineConfig

RAISES DESCRIPTION
ValueError

If overrides.language doesn't match base.language.

Source code in src/mcp_zen_of_languages/analyzers/pipeline.py
def merge_pipeline_overrides(
    base: PipelineConfig,
    overrides: PipelineConfig | None,
) -> PipelineConfig:
    """Layer user overrides from ``zen-config.yaml`` onto rule-derived defaults.

    Override entries are matched to base entries by ``DetectorConfig.type``.
    When a match is found, only the fields explicitly set in the override are
    applied (via ``model_dump(exclude_unset=True)``), preserving every
    rule-derived default that the user didn't touch.  Overrides whose type
    doesn't appear in the base are appended as new detector entries.

    Args:
        base (PipelineConfig): Pipeline produced by ``PipelineConfig.from_rules`` with
            thresholds derived from canonical zen principles.
        overrides (PipelineConfig | None): Pipeline section from ``zen-config.yaml``, or ``None``
            to skip merging entirely.

    Returns:
        PipelineConfig: A new ``PipelineConfig`` containing the merged detector list.

    Raises:
        ValueError: If *overrides.language* doesn't match *base.language*.
    """
    if overrides is None:
        return base
    if overrides.language != base.language:
        msg = "Override pipeline language mismatch"
        raise ValueError(msg)
    from mcp_zen_of_languages.analyzers.registry import REGISTRY

    merged = REGISTRY.merge_configs(base.detectors, overrides.detectors)
    return PipelineConfig(language=base.language, detectors=merged)

mcp_zen_of_languages.analyzers.analyzer_factory

Factory function for creating language-specific analyzers.

Centralises the mapping from language identifiers (including common aliases like "py", "ts", "rs") to their concrete BaseAnalyzer subclass. Callers never need to import individual analyzer modules; they go through create_analyzer and receive a fully configured instance.

Framework analyzers (React, Vue, Angular, Next.js, Pydantic, FastAPI, Django, SQLAlchemy) are loaded lazily on first use to keep import-time overhead low and to avoid surfacing circular-import issues during module initialisation.
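The alias-normalisation plus lazy-import approach can be sketched as follows. The alias tables and classes here are hypothetical, and json.JSONDecoder merely stands in for a lazily imported framework analyzer class:

```python
import importlib

# Direct aliases resolve to classes already in scope; lazy entries map
# to (module, class) pairs imported only on first request, then cached.
_ALIASES = {"python": "PythonToy", "py": "PythonToy"}
_LAZY = {"react": ("json", "JSONDecoder")}  # stdlib stand-in for a framework module
_CACHE: dict[str, type] = {}


class PythonToy:
    pass


def create(language: str):
    lang = language.lower()  # normalise case before matching
    if lang in _LAZY:
        if lang not in _CACHE:
            module_name, class_name = _LAZY[lang]
            module = importlib.import_module(module_name)  # deferred import
            _CACHE[lang] = getattr(module, class_name)
        return _CACHE[lang]()
    if lang in _ALIASES:
        return globals()[_ALIASES[lang]]()
    raise ValueError(f"Unsupported language: {language}")
```

Deferring the import keeps module-load time proportional to the languages actually used and sidesteps circular imports during initialisation.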

Functions

supported_languages

supported_languages()

Return canonical language identifiers accepted by create_analyzer.

Source code in src/mcp_zen_of_languages/analyzers/analyzer_factory.py
def supported_languages() -> tuple[str, ...]:
    """Return canonical language identifiers accepted by ``create_analyzer``."""
    return SUPPORTED_LANGUAGES

create_analyzer

create_analyzer(
    language, config=None, pipeline_config=None
)

Create a language-specific analyzer instance.

Normalises language to lowercase and matches it against known identifiers and aliases. The returned analyzer has its detection pipeline pre-built from zen rules, optionally overlaid with pipeline_config overrides from zen-config.yaml.

Framework analyzers (React, Vue, Angular, etc.) are imported lazily on first use; language analyzers are imported eagerly at module load.

PARAMETER DESCRIPTION
language

Language name or alias (case-insensitive). Common aliases are accepted — see the table below.

TYPE: str

config

Global analyzer thresholds; passed through to the analyzer's __init__. Defaults to None.

TYPE: AnalyzerConfig | None DEFAULT: None

pipeline_config

Optional detector-level overrides merged on top of the rule-derived pipeline. Defaults to None.

TYPE: PipelineConfig | None DEFAULT: None

RETURNS DESCRIPTION
BaseAnalyzer

A configured BaseAnalyzer subclass ready to call analyze.

TYPE: BaseAnalyzer

RAISES DESCRIPTION
ValueError

If language does not match any supported language.

Note

Supported languages and accepted aliases:

=============================== ===================================
Language                        Accepted identifiers
=============================== ===================================
Python                          python, py
TypeScript                      typescript, ts, tsx
JavaScript                      javascript, js, jsx
Go                              go
Rust                            rust, rs
SVG                             svg
Bash                            bash, sh, shell
PowerShell                      powershell, ps, pwsh
Ansible                         ansible, ansible-playbook
Ruby                            ruby, rb
C++                             cpp, c++, cc, cxx
C#                              csharp, cs
CSS                             css, scss, less
Docker Compose                  docker_compose, docker-compose
Dockerfile                      dockerfile, docker
YAML                            yaml, yml
GitHub Actions                  github-actions, github_actions, gha
TOML                            toml
XML                             xml
JSON                            json
SQL                             sql, postgresql, mysql, sqlite, mssql
Terraform                       terraform, tf
Markdown / MDX                  markdown, mdx
LaTeX                           latex, tex, ltx, sty, bib, bibtex
=============================== ===================================

Source code in src/mcp_zen_of_languages/analyzers/analyzer_factory.py
def create_analyzer(
    language: str,
    config: AnalyzerConfig | None = None,
    pipeline_config: PipelineConfig | None = None,
) -> BaseAnalyzer:
    """Create a language-specific analyzer instance.

    Normalises *language* to lowercase and matches it against known
    identifiers and aliases.  The returned analyzer has its detection
    pipeline pre-built from zen rules, optionally overlaid with
    *pipeline_config* overrides from ``zen-config.yaml``.

    Framework analyzers (React, Vue, Angular, etc.) are imported lazily on
    first use; language analyzers are imported eagerly at module load.

    Args:
        language (str): Language name or alias (case-insensitive).  Common
            aliases are accepted — see the table below.
        config (AnalyzerConfig | None, optional): Global analyzer thresholds; passed through to the
            analyzer's ``__init__``. Defaults to None.
        pipeline_config (PipelineConfig | None, optional): Optional detector-level overrides merged on
            top of the rule-derived pipeline. Defaults to None.

    Returns:
        BaseAnalyzer: A configured [`BaseAnalyzer`][mcp_zen_of_languages.analyzers.base.BaseAnalyzer]
        subclass ready to call ``analyze``.

    Raises:
        ValueError: If *language* does not match any supported language.

    Note:
        **Supported languages and accepted aliases:**

        =============================== ===================================
        Language                        Accepted identifiers
        =============================== ===================================
        Python                          ``python``, ``py``
        TypeScript                      ``typescript``, ``ts``, ``tsx``
        JavaScript                      ``javascript``, ``js``, ``jsx``
        Go                              ``go``
        Rust                            ``rust``, ``rs``
        SVG                             ``svg``
        Bash                            ``bash``, ``sh``, ``shell``
        PowerShell                      ``powershell``, ``ps``, ``pwsh``
        Ansible                         ``ansible``, ``ansible-playbook``
        Ruby                            ``ruby``, ``rb``
        C++                             ``cpp``, ``c++``, ``cc``, ``cxx``
        C#                              ``csharp``, ``cs``
        CSS                             ``css``, ``scss``, ``less``
        Docker Compose                  ``docker_compose``, ``docker-compose``
        Dockerfile                      ``dockerfile``, ``docker``
        YAML                            ``yaml``, ``yml``
        GitHub Actions                  ``github-actions``, ``github_actions``, ``gha``
        TOML                            ``toml``
        XML                             ``xml``
        JSON                            ``json``
        SQL                             ``sql``, ``postgresql``, ``mysql``, ``sqlite``, ``mssql``
        Terraform                       ``terraform``, ``tf``
        Markdown / MDX                  ``markdown``, ``mdx``
        LaTeX                           ``latex``, ``tex``, ``ltx``, ``sty``, ``bib``, ``bibtex``
        =============================== ===================================
    """
    lang = language.lower()
    if lang in _FRAMEWORK_ALIASES:
        analyzer_class = _resolve_framework_class(lang)
    else:
        analyzer_class = _ANALYZERS_BY_ALIAS.get(lang)
    if analyzer_class is None:
        msg = f"Unsupported language: {language}"
        raise ValueError(msg)
    return analyzer_class(config=config, pipeline_config=pipeline_config)

mcp_zen_of_languages.adapters.rules_adapter

Legacy bridge that adapts canonical zen principles into flat Violation models.

The RulesAdapter implements the Adapter pattern: it translates the rich ZenPrinciple / LanguageZenPrinciples hierarchy defined in rules/base_models.py into the Violation list that the original monolithic analyzer pipeline expected. New code should prefer the DetectionPipeline architecture; this adapter exists so that callers written against the old dictionary-based API continue to work without modification.

All data access uses Pydantic model attributes — never raw dictionary keys.

Classes

RulesAdapterConfig

Bases: BaseModel

Threshold overrides that callers pass to RulesAdapter.

When a field is None the adapter falls back to the threshold embedded in the ZenPrinciple.metrics dictionary. Setting an explicit value here takes precedence, allowing project-level customisation without editing the canonical rule definitions.

ATTRIBUTE DESCRIPTION
max_nesting_depth

Override for the maximum indentation depth before a nesting violation is emitted.

TYPE: int | None

max_cyclomatic_complexity

Override for the cyclomatic-complexity ceiling.

TYPE: int | None

min_maintainability_index

Override for the minimum acceptable maintainability index (Radon scale).

TYPE: float | None

severity_threshold

Floor used by get_critical_violations to filter low-severity findings. Defaults to 5 (1-10 scale).

TYPE: int

RulesAdapter

RulesAdapter(language, config=None)

Legacy bridge that projects ZenPrinciple definitions onto flat Violation lists.

The adapter iterates every principle registered for a language, inspects its metrics dictionary, and applies lightweight heuristic checks (nesting depth, cyclomatic complexity, maintainability index, dependency cycles, and regex-based pattern matching). Each failed check produces a Violation that downstream reporters can render.

This class exists to preserve backward-compatibility with the pre-pipeline analysis path. New detectors should be implemented as ViolationDetector subclasses and registered via DetectionPipeline.

See Also

analyzers.pipeline.DetectionPipeline — the modern replacement. rules.base_models.ZenPrinciple — canonical principle definitions.

Bind the adapter to a language and optional threshold overrides.

Loads the LanguageZenPrinciples for language from the global ZEN_REGISTRY on construction so that subsequent find_violations calls can iterate the principle set without repeated lookups.

PARAMETER DESCRIPTION
language

Lowercase language key (e.g. "python", "rust").

TYPE: str

config

Threshold overrides. When None, a default RulesAdapterConfig with all overrides unset is created. Defaults to None.

TYPE: RulesAdapterConfig | None DEFAULT: None

Source code in src/mcp_zen_of_languages/adapters/rules_adapter.py
def __init__(self, language: str, config: RulesAdapterConfig | None = None) -> None:
    """Bind the adapter to a language and optional threshold overrides.

    Loads the ``LanguageZenPrinciples`` for *language* from the global
    ``ZEN_REGISTRY`` on construction so that subsequent ``find_violations``
    calls can iterate the principle set without repeated lookups.

    Args:
        language (str): Lowercase language key (e.g. ``"python"``,
            ``"rust"``).
        config (RulesAdapterConfig | None, optional): Threshold overrides.  When
            ``None``, a default ``RulesAdapterConfig`` with all overrides
            unset is created. Defaults to None.
    """
    self.language = language
    self.config = config or RulesAdapterConfig()
    self.lang_zen: LanguageZenPrinciples | None = get_language_zen(language)
Functions
find_violations
find_violations(
    code,
    cyclomatic_summary=None,
    maintainability_index=None,
    dependency_analysis=None,
)

Walk every zen principle for this language and apply lightweight heuristic checks.

Each principle's metrics dictionary determines which checks fire. For example, a principle containing max_nesting_depth triggers the nesting check, while detect_circular_dependencies triggers the dependency-cycle check. Results from all principles are concatenated into a single flat list.

Note

The check pipeline runs in a fixed order for each principle: metrics extraction → nesting depth → cyclomatic complexity → maintainability index → dependency analysis → pattern matching. A check is skipped when its corresponding metric key is absent from the principle or when the required upstream data (e.g. cyclomatic_summary) is None.

PARAMETER DESCRIPTION
code

Source code to analyse.

TYPE: str

cyclomatic_summary

Pre-computed cyclomatic-complexity metrics, typically produced by radon. Defaults to None.

TYPE: CyclomaticSummary | None DEFAULT: None

maintainability_index

Radon maintainability index (0-100 scale). Defaults to None.

TYPE: float | None DEFAULT: None

dependency_analysis

Import-graph analysis produced by upstream dependency resolution. Defaults to None.

TYPE: DependencyAnalysis | None DEFAULT: None

RETURNS DESCRIPTION
list[Violation]

list[Violation]: All violations found across every registered principle.

Source code in src/mcp_zen_of_languages/adapters/rules_adapter.py
def find_violations(
    self,
    code: str,
    cyclomatic_summary: CyclomaticSummary | None = None,
    maintainability_index: float | None = None,
    dependency_analysis: DependencyAnalysis | None = None,
) -> list[Violation]:
    """Walk every zen principle for this language and apply lightweight heuristic checks.

    Each principle's ``metrics`` dictionary determines which checks fire.
    For example, a principle containing ``max_nesting_depth`` triggers the
    nesting check, while ``detect_circular_dependencies`` triggers the
    dependency-cycle check.  Results from all principles are concatenated
    into a single flat list.

    Note:
        The check pipeline runs in a fixed order for each principle:
        metrics extraction → nesting depth → cyclomatic complexity →
        maintainability index → dependency analysis → pattern matching.
        A check is skipped when its corresponding metric key is absent
        from the principle or when the required upstream data
        (e.g. ``cyclomatic_summary``) is ``None``.

    Args:
        code (str): Source code to analyse.
        cyclomatic_summary (CyclomaticSummary | None, optional): Pre-computed
            cyclomatic-complexity metrics, typically produced by ``radon``. Defaults to None.
        maintainability_index (float | None, optional): Radon maintainability index
            (0-100 scale). Defaults to None.
        dependency_analysis (DependencyAnalysis | None, optional): Import-graph
            analysis produced by upstream dependency resolution. Defaults to None.

    Returns:
        list[Violation]: All violations found across every registered
        principle.
    """
    violations: list[Violation] = []

    if not self.lang_zen:
        return violations

    for principle in self.lang_zen.principles:
        principle_metrics = principle.metrics or {}

        violations.extend(
            self._check_nesting_depth(code, principle, principle_metrics),
        )

        if cyclomatic_summary is not None:
            violations.extend(
                self._check_cyclomatic_complexity(
                    cyclomatic_summary,
                    principle,
                    principle_metrics,
                ),
            )

        if maintainability_index is not None:
            violations.extend(
                self._check_maintainability_index(
                    maintainability_index,
                    principle,
                    principle_metrics,
                ),
            )

        if dependency_analysis is not None:
            violations.extend(
                self._check_dependencies(
                    dependency_analysis,
                    principle,
                    principle_metrics,
                ),
            )

        violations.extend(self._check_patterns(code, principle))

    return violations
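
The metric-keyed dispatch described above can be sketched in isolation. This illustrative toy uses plain dicts in place of `ZenPrinciple` models, and its nesting heuristic is a simplification of the adapter's real check; only the dispatch shape is the point:

```python
def run_checks(code: str, principles: list[dict]) -> list[str]:
    """Toy dispatch: a check fires only when its metric key is present."""
    violations: list[str] = []
    for principle in principles:
        metrics = principle.get("metrics") or {}
        if "max_nesting_depth" in metrics:  # key present -> nesting check fires
            depth = max(
                (len(line) - len(line.lstrip())) // 4
                for line in code.splitlines()
                if line.strip()
            )
            if depth > metrics["max_nesting_depth"]:
                violations.append(f"{principle['name']}: nesting depth {depth}")
    return violations
```

Because absent keys simply skip their check, a language can opt out of a heuristic by omitting the metric from its principle definitions.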
get_critical_violations
get_critical_violations(violations)

Return only violations whose severity meets or exceeds config.severity_threshold.

PARAMETER DESCRIPTION
violations

Full violation list, typically from find_violations.

TYPE: list[Violation]

RETURNS DESCRIPTION
list[Violation]

list[Violation]: Subset of violations at or above the configured severity floor.

Source code in src/mcp_zen_of_languages/adapters/rules_adapter.py
def get_critical_violations(self, violations: list[Violation]) -> list[Violation]:
    """Return only violations whose severity meets or exceeds ``config.severity_threshold``.

    Args:
        violations (list[Violation]): Full violation list, typically from
            ``find_violations``.

    Returns:
        list[Violation]: Subset of *violations* at or above the configured
        severity floor.
    """
    return [v for v in violations if v.severity >= self.config.severity_threshold]
get_detector_config
get_detector_config(detector_name)

Aggregate zen-principle metrics into a single DetectorConfig.

Walks every principle for the bound language and collects thresholds, regex patterns, and metadata that match detector_name. The result lets detectors stay language-agnostic — they only consume the config shape, never raw principle objects.

PARAMETER DESCRIPTION
detector_name

Key used to filter relevant metrics (e.g. "cyclomatic_complexity").

TYPE: str

RETURNS DESCRIPTION
DetectorConfig

A DetectorConfig ready to be passed into a ViolationDetector.detect call.

TYPE: DetectorConfig

See Also

rules.base_models.DetectorConfig — the returned Pydantic model.

Source code in src/mcp_zen_of_languages/adapters/rules_adapter.py
def get_detector_config(self, detector_name: str) -> DetectorConfig:
    """Aggregate zen-principle metrics into a single ``DetectorConfig``.

    Walks every principle for the bound language and collects thresholds,
    regex patterns, and metadata that match *detector_name*.  The result
    lets detectors stay language-agnostic — they only consume the config
    shape, never raw principle objects.

    Args:
        detector_name (str): Key used to filter relevant metrics (e.g.
            ``"cyclomatic_complexity"``).

    Returns:
        DetectorConfig: A ``DetectorConfig`` ready to be passed into a
        ``ViolationDetector.detect`` call.

    See Also:
        ``rules.base_models.DetectorConfig`` — the returned Pydantic model.
    """
    from mcp_zen_of_languages.rules.base_models import DetectorConfig

    thresholds: dict[str, float] = {}
    patterns: list[str] = []
    metadata: dict[str, object] = {}

    if not self.lang_zen:
        return DetectorConfig(
            name=detector_name,
            thresholds=thresholds,
            patterns=patterns,
            metadata=metadata,
        )

    # Aggregate metrics across principles and choose values relevant to detector
    for p in self.lang_zen.principles:
        if p.metrics:
            for k, v in p.metrics.items():
                # Simple heuristic: include metrics that mention detector name
                # or common keys
                if detector_name in k or k in (
                    "max_function_length",
                    "max_cyclomatic_complexity",
                    "max_nesting_depth",
                    "max_class_length",
                    "min_maintainability_index",
                ):
                    try:
                        thresholds[k] = float(v)
                    except (ValueError, TypeError):
                        metadata[k] = v
        if p.detectable_patterns:
            patterns.extend(p.detectable_patterns)

    return DetectorConfig(
        name=detector_name,
        thresholds=thresholds,
        patterns=patterns,
        metadata=metadata,
    )
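
Stripped of Pydantic, the aggregation heuristic above amounts to: numeric values whose key mentions the detector name (or is a common key) become thresholds, non-numeric values fall through to metadata. A standalone sketch with plain dicts standing in for principle models:

```python
COMMON_KEYS = {
    "max_function_length",
    "max_cyclomatic_complexity",
    "max_nesting_depth",
    "max_class_length",
    "min_maintainability_index",
}

def aggregate(detector_name: str, principles: list[dict]):
    """Collect thresholds, patterns, and metadata relevant to one detector."""
    thresholds: dict[str, float] = {}
    patterns: list[str] = []
    metadata: dict[str, object] = {}
    for p in principles:
        for key, value in (p.get("metrics") or {}).items():
            if detector_name in key or key in COMMON_KEYS:
                try:
                    thresholds[key] = float(value)  # numeric -> threshold
                except (ValueError, TypeError):
                    metadata[key] = value  # non-numeric -> metadata
        patterns.extend(p.get("detectable_patterns") or [])
    return thresholds, patterns, metadata
```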
summarize_violations
summarize_violations(violations)

Bucket violations into four severity bands and return per-band counts.

Bands: critical (9-10), high (7-8), medium (4-6), low (1-3).

PARAMETER DESCRIPTION
violations

Violation list to summarise.

TYPE: list[Violation]

RETURNS DESCRIPTION
dict[str, int]

dict[str, int]: Dict with keys "critical", "high", "medium", "low" mapped to integer counts.

Source code in src/mcp_zen_of_languages/adapters/rules_adapter.py
def summarize_violations(self, violations: list[Violation]) -> dict[str, int]:
    """Bucket violations into four severity bands and return per-band counts.

    Bands: *critical* (9-10), *high* (7-8), *medium* (4-6), *low* (1-3).

    Args:
        violations (list[Violation]): Violation list to summarise.

    Returns:
        dict[str, int]: Dict with keys ``"critical"``, ``"high"``, ``"medium"``, ``"low"``
        mapped to integer counts.
    """
    summary = {
        "critical": 0,  # 9-10
        "high": 0,  # 7-8
        "medium": 0,  # 4-6
        "low": 0,  # 1-3
    }

    for violation in violations:
        if violation.severity >= SEVERITY_CRITICAL:
            summary["critical"] += 1
        elif violation.severity >= SEVERITY_HIGH:
            summary["high"] += 1
        elif violation.severity >= SEVERITY_MEDIUM:
            summary["medium"] += 1
        else:
            summary["low"] += 1

    return summary
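
With the band floors implied by the inline comments (critical 9-10, high 7-8, medium 4-6, low 1-3), the bucketing reduces to a chain of floor comparisons. A self-contained sketch over bare severity integers:

```python
# Band floors assumed from the documented ranges: critical 9-10, high 7-8,
# medium 4-6, low 1-3.
SEVERITY_CRITICAL, SEVERITY_HIGH, SEVERITY_MEDIUM = 9, 7, 4

def summarize(severities: list[int]) -> dict[str, int]:
    """Count severities per band by testing floors from highest to lowest."""
    summary = {"critical": 0, "high": 0, "medium": 0, "low": 0}
    for severity in severities:
        if severity >= SEVERITY_CRITICAL:
            summary["critical"] += 1
        elif severity >= SEVERITY_HIGH:
            summary["high"] += 1
        elif severity >= SEVERITY_MEDIUM:
            summary["medium"] += 1
        else:
            summary["low"] += 1
    return summary
```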

Functions

Language analyzers

mcp_zen_of_languages.languages.python.analyzer

Python-specific analyzer built on the Template Method / Strategy architecture.

This module houses PythonAnalyzer, the reference language implementation. It plugs Python parsing (stdlib ast), radon-based metrics, and the Python detector pipeline into the shared BaseAnalyzer skeleton so that every zen-principle check runs in a deterministic, fail-safe sequence.

See Also

mcp_zen_of_languages.analyzers.base.BaseAnalyzer for the template method that orchestrates parsing → metrics → detection → result building.

Classes

PythonAnalyzer

PythonAnalyzer(config=None, pipeline_config=None)

Bases: BaseAnalyzer, LocationHelperMixin

Analyze Python source code against zen principles.

PythonAnalyzer is the reference language implementation. It overrides the three Template Method hooks — parse_code, compute_metrics, and build_pipeline — to wire stdlib ast parsing, radon-based metrics collection, and the full suite of Python-specific violation detectors.

The analyzer also builds an import-level dependency graph so that cross-file detectors (circular dependencies, duplicate implementations, deep inheritance) can reason about the broader codebase.

ATTRIBUTE DESCRIPTION
_pipeline_config

Optional overrides applied on top of the rule-derived detector defaults when constructing the detection pipeline.

Initialise the Python analyzer with optional config overrides.

PARAMETER DESCRIPTION
config

Typed analyzer configuration controlling thresholds such as max cyclomatic complexity or nesting depth. When None, default_config() supplies sensible defaults. Defaults to None.

TYPE: AnalyzerConfig | None DEFAULT: None

pipeline_config

Pipeline-level overrides merged on top of the rule-derived detector defaults. Typically loaded from the pipelines: section of zen-config.yaml. Defaults to None.

TYPE: PipelineConfig | None DEFAULT: None

Source code in src/mcp_zen_of_languages/languages/python/analyzer.py
def __init__(
    self,
    config: AnalyzerConfig | None = None,
    pipeline_config: PipelineConfig | None = None,
) -> None:
    """Initialise the Python analyzer with optional config overrides.

    Args:
        config (AnalyzerConfig | None, optional): Typed analyzer configuration controlling thresholds such as
            max cyclomatic complexity or nesting depth.  When ``None``,
            ``default_config()`` supplies sensible defaults. Defaults to None.
        pipeline_config (PipelineConfig | None, optional): Pipeline-level overrides merged on top of the
            rule-derived detector defaults.  Typically loaded from the
            ``pipelines:`` section of ``zen-config.yaml``. Defaults to None.
    """
    self._pipeline_config = pipeline_config
    super().__init__(config=config)
Functions
default_config
default_config()

Return the baseline Python configuration.

These defaults are used when no explicit config is passed to the constructor. They encode the recommended thresholds for idiomatic Python code (e.g. max nesting depth of 3, max cyclomatic complexity of 10).

RETURNS DESCRIPTION
PythonAnalyzerConfig

A fresh config instance with community-standard thresholds pre-populated.

TYPE: PythonAnalyzerConfig

Source code in src/mcp_zen_of_languages/languages/python/analyzer.py
def default_config(self) -> PythonAnalyzerConfig:
    """Return the baseline Python configuration.

    These defaults are used when no explicit ``config`` is passed to the
    constructor.  They encode the recommended thresholds for idiomatic
    Python code (e.g. max nesting depth of 3, max cyclomatic complexity
    of 10).

    Returns:
        PythonAnalyzerConfig: A fresh config instance with community-standard
            thresholds pre-populated.
    """
    return PythonAnalyzerConfig()
language
language()

Return "python" as the language identifier.

This string keys into the analyzer factory and the detector registry, ensuring the correct set of detectors is loaded for Python source.

RETURNS DESCRIPTION
str

Always "python".

TYPE: str

Source code in src/mcp_zen_of_languages/languages/python/analyzer.py
def language(self) -> str:
    """Return ``"python"`` as the language identifier.

    This string keys into the analyzer factory and the detector registry,
    ensuring the correct set of detectors is loaded for Python source.

    Returns:
        str: Always ``"python"``.
    """
    return "python"
capabilities
capabilities()

Declare Python analyzer support for AST, dependencies, and metrics.

Source code in src/mcp_zen_of_languages/languages/python/analyzer.py
def capabilities(self) -> AnalyzerCapabilities:
    """Declare Python analyzer support for AST, dependencies, and metrics."""
    return AnalyzerCapabilities(
        supports_ast=True,
        supports_dependency_analysis=True,
        supports_metrics=True,
    )
parse_code
parse_code(code)

Parse Python source into a ParserResult representation.

Delegates to parse_python, which already handles backend selection and returns the canonical ParserResult consumed by detectors.

PARAMETER DESCRIPTION
code

Raw Python source text to parse.

TYPE: str

RETURNS DESCRIPTION
ParserResult | None

ParserResult | None: Parse tree, or None if a SyntaxError or other parse failure occurs.

Source code in src/mcp_zen_of_languages/languages/python/analyzer.py
def parse_code(self, code: str) -> ParserResult | None:
    """Parse Python source into a ``ParserResult`` representation.

    Delegates to ``parse_python``, which already handles backend selection
    and returns the canonical ``ParserResult`` consumed by detectors.

    Args:
        code (str): Raw Python source text to parse.

    Returns:
        ParserResult | None: Parse tree, or ``None`` if a
            ``SyntaxError`` or other parse failure occurs.
    """
    from mcp_zen_of_languages.utils.parsers import parse_python

    try:
        return parse_python(code)
    except Exception:  # noqa: BLE001
        return None
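
The fail-safe contract — any parse failure yields `None` rather than an exception — can be reproduced with nothing but stdlib `ast`. This is a sketch of the behaviour, not the actual `parse_python` implementation:

```python
import ast

def parse_or_none(code: str):
    """Return an AST for valid Python source, or None on any parse failure."""
    try:
        return ast.parse(code)
    except (SyntaxError, ValueError):
        return None
```

Downstream detectors can then guard on `context.ast is None` instead of wrapping every access in a try/except.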
compute_metrics
compute_metrics(code, _ast_tree)

Collect cyclomatic complexity, maintainability index, and line count.

Uses MetricsCollector which internally calls radon for cyclomatic complexity per function block and Halstead-based maintainability index. These metrics feed into several detectors (e.g. CyclomaticComplexityDetector, severity scaling).

PARAMETER DESCRIPTION
code

Python source text to measure.

TYPE: str

_ast_tree

Parsed syntax tree (currently unused by radon but accepted for API symmetry with other language analyzers).

TYPE: ParserResult | None

RETURNS DESCRIPTION
tuple[CyclomaticSummary | None, float | None, int]

tuple[CyclomaticSummary | None, float | None, int]: Three-element tuple of (cyclomatic summary, maintainability index, total lines).

Source code in src/mcp_zen_of_languages/languages/python/analyzer.py
def compute_metrics(
    self,
    code: str,
    _ast_tree: ParserResult | None,
) -> tuple[CyclomaticSummary | None, float | None, int]:
    """Collect cyclomatic complexity, maintainability index, and line count.

    Uses ``MetricsCollector`` which internally calls radon for cyclomatic
    complexity per function block and Halstead-based maintainability index.
    These metrics feed into several detectors (e.g.
    ``CyclomaticComplexityDetector``, severity scaling).

    Args:
        code (str): Python source text to measure.
        _ast_tree (ParserResult | None): Parsed syntax tree (currently unused by radon but
            accepted for API symmetry with other language analyzers).

    Returns:
        tuple[CyclomaticSummary | None, float | None, int]: Three-element
            tuple of (cyclomatic summary, maintainability index, total
            lines).
    """
    from mcp_zen_of_languages.metrics.collector import MetricsCollector

    return MetricsCollector.collect(code)
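
radon's cyclomatic count is essentially "one plus the number of decision points". A rough stdlib-only approximation of that idea — radon's real algorithm also weighs boolean operators, comprehensions, and more:

```python
import ast

def branch_count(code: str) -> int:
    """Crude cyclomatic proxy: 1 + number of branching AST nodes."""
    tree = ast.parse(code)
    branches = sum(
        isinstance(node, (ast.If, ast.For, ast.While, ast.ExceptHandler))
        for node in ast.walk(tree)
    )
    return 1 + branches
```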
build_pipeline
build_pipeline()

Assemble the Python detection pipeline from the detector registry.

Delegates to the base class which looks up all detectors registered for "python" in the global registry and wires them with configs derived from the active zen rules and any pipeline_config overrides.

RETURNS DESCRIPTION
DetectionPipeline

Ordered pipeline of Python violation detectors.

TYPE: DetectionPipeline

Source code in src/mcp_zen_of_languages/languages/python/analyzer.py
def build_pipeline(self) -> DetectionPipeline:
    """Assemble the Python detection pipeline from the detector registry.

    Delegates to the base class which looks up all detectors registered
    for ``"python"`` in the global registry and wires them with configs
    derived from the active zen rules and any ``pipeline_config`` overrides.

    Returns:
        DetectionPipeline: Ordered pipeline of Python violation detectors.
    """
    return super().build_pipeline()

mcp_zen_of_languages.languages.typescript.analyzer

Language-specific analyzer implementation for typescript source files.

Classes

TypeScriptAnalyzer

TypeScriptAnalyzer(config=None, pipeline_config=None)

Bases: BaseAnalyzer

Analyzer for TypeScript source files focusing on type-system discipline.

TypeScript analysis is unique because the language offers an opt-in type system layered over JavaScript. Without enforcement, codebases drift toward any-heavy, loosely-typed patterns that negate TypeScript's value. This analyzer uses regex-based heuristic detectors (no TS AST parser yet) to surface anti-patterns such as any abuse, missing return types, and non-null assertion overuse.

Note

Because no tree-sitter or TypeScript compiler API parser is wired, parse_code returns None and all detectors operate on raw source text via regular expressions.

See Also

TypeScriptAnalyzerConfig for language-specific threshold defaults.
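
Two of the anti-patterns named above can be illustrated with raw-text regexes. The patterns below are simplified sketches for exposition, not the analyzer's actual detector regexes:

```python
import re

# Simplified stand-ins for the analyzer's heuristic patterns.
ANY_ANNOTATION = re.compile(r":\s*any\b")  # e.g. `x: any`
NON_NULL_ASSERT = re.compile(r"\w+!\.")    # e.g. `user!.name`

def heuristics(ts_source: str) -> dict[str, int]:
    """Count occurrences of two TypeScript anti-patterns in raw source text."""
    return {
        "any_annotations": len(ANY_ANNOTATION.findall(ts_source)),
        "non_null_assertions": len(NON_NULL_ASSERT.findall(ts_source)),
    }
```

Regex heuristics trade precision for zero parser dependencies, which is exactly why this analyzer can run before a TypeScript AST backend is wired in.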

Initialize instance.

PARAMETER DESCRIPTION
config

Typed detector or analyzer configuration that controls thresholds. Defaults to None.

TYPE: AnalyzerConfig | None DEFAULT: None

pipeline_config

Optional pipeline overrides used to customize detector configuration. Defaults to None.

TYPE: 'PipelineConfig' | None DEFAULT: None

Source code in src/mcp_zen_of_languages/languages/typescript/analyzer.py
def __init__(
    self,
    config: AnalyzerConfig | None = None,
    pipeline_config: PipelineConfig | None = None,
) -> None:
    """Initialize instance.

    Args:
        config (AnalyzerConfig | None, optional): Typed detector or analyzer configuration that controls thresholds. Defaults to None.
        pipeline_config ('PipelineConfig' | None, optional): Optional pipeline overrides used to customize detector configuration. Defaults to None.
    """
    self._pipeline_config = pipeline_config
    super().__init__(config=config)
Functions
default_config
default_config()

Return default analyzer configuration for this language.

RETURNS DESCRIPTION
TypeScriptAnalyzerConfig

Default analyzer settings for the current language implementation.

TYPE: TypeScriptAnalyzerConfig

Source code in src/mcp_zen_of_languages/languages/typescript/analyzer.py
def default_config(self) -> TypeScriptAnalyzerConfig:
    """Return default analyzer configuration for this language.

    Returns:
        TypeScriptAnalyzerConfig: Default analyzer settings for the current language implementation.
    """
    return TypeScriptAnalyzerConfig()
language
language()

Return the analyzer language key.

RETURNS DESCRIPTION
str

Identifier string consumed by callers.

TYPE: str

Source code in src/mcp_zen_of_languages/languages/typescript/analyzer.py
def language(self) -> str:
    """Return the analyzer language key.

    Returns:
        str: Identifier string consumed by callers.
    """
    return "typescript"
capabilities
capabilities()

Declare support for import/require dependency extraction.

Source code in src/mcp_zen_of_languages/languages/typescript/analyzer.py
def capabilities(self) -> AnalyzerCapabilities:
    """Declare support for import/require dependency extraction."""
    return AnalyzerCapabilities(supports_dependency_analysis=True)
parse_code
parse_code(_code)

Parse source text into a language parser result when available.

PARAMETER DESCRIPTION
_code

Source code text being parsed or analyzed.

TYPE: str

RETURNS DESCRIPTION
ParserResult | None

ParserResult | None: Normalized parser output, or None when parsing is unavailable.

Source code in src/mcp_zen_of_languages/languages/typescript/analyzer.py
def parse_code(self, _code: str) -> ParserResult | None:
    """Parse source text into a language parser result when available.

    Args:
        _code (str): Source code text being parsed or analyzed.

    Returns:
        ParserResult | None: Normalized parser output, or ``None`` when parsing is unavailable.
    """
    # No TS parser wired yet; return None to allow heuristic detectors to run.
    return None
compute_metrics
compute_metrics(code, _ast_tree)

Compute complexity, maintainability, and line-count metrics.

PARAMETER DESCRIPTION
code

Source code text being parsed or analyzed.

TYPE: str

_ast_tree

Parsed syntax tree produced by the language parser, when available.

TYPE: ParserResult | None

RETURNS DESCRIPTION
tuple[CyclomaticSummary | None, float | None, int]

tuple[CyclomaticSummary | None, float | None, int]: Tuple containing computed metrics in analyzer-defined order.

Source code in src/mcp_zen_of_languages/languages/typescript/analyzer.py
def compute_metrics(
    self,
    code: str,
    _ast_tree: ParserResult | None,
) -> tuple[CyclomaticSummary | None, float | None, int]:
    """Compute complexity, maintainability, and line-count metrics.

    Args:
        code (str): Source code text being parsed or analyzed.
        _ast_tree (ParserResult | None): Parsed syntax tree produced by the language parser, when available.

    Returns:
        tuple[CyclomaticSummary | None, float | None, int]: Tuple containing computed metrics in analyzer-defined order.
    """
    return None, None, len(code.splitlines())
build_pipeline
build_pipeline()

Build the detector pipeline for this analyzer.

RETURNS DESCRIPTION
DetectionPipeline

Pipeline instance used to run configured detectors.

TYPE: DetectionPipeline

Source code in src/mcp_zen_of_languages/languages/typescript/analyzer.py
def build_pipeline(self) -> DetectionPipeline:
    """Build the detector pipeline for this analyzer.

    Returns:
        DetectionPipeline: Pipeline instance used to run configured detectors.
    """
    return super().build_pipeline()

mcp_zen_of_languages.languages.rust.analyzer

Language-specific analyzer implementation for rust source files.

Classes

RustAnalyzer

RustAnalyzer(config=None, pipeline_config=None)

Bases: BaseAnalyzer

Analyzer for Rust source files centered on ownership safety and idiomatic patterns.

Rust analysis is distinct because the language's borrow checker enforces memory safety at compile time, but developers can bypass those guarantees with unsafe blocks, unwrap() calls, and excessive clone(). This analyzer applies regex-based detectors to flag those escape hatches alongside idiomatic checks for newtype patterns, iterator preference, and standard-trait implementations.

Note

No Rust AST parser is currently wired; parse_code returns None and detectors operate on raw source text.

See Also

RustUnwrapUsageDetector, RustUnsafeBlocksDetector for the highest-impact detectors in this pipeline.
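
A minimal sketch of the raw-text scanning these detectors perform — the regexes below are illustrative simplifications, not the real detector patterns:

```python
import re

UNWRAP = re.compile(r"\.unwrap\(\)")         # Result/Option escape hatch
UNSAFE_BLOCK = re.compile(r"\bunsafe\s*\{")  # borrow-checker bypass

def flag(rust_source: str) -> list[tuple[int, str]]:
    """Return (line number, finding) pairs for unwrap calls and unsafe blocks."""
    findings = []
    for lineno, line in enumerate(rust_source.splitlines(), start=1):
        if UNWRAP.search(line):
            findings.append((lineno, "unwrap"))
        if UNSAFE_BLOCK.search(line):
            findings.append((lineno, "unsafe"))
    return findings
```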

Initialize instance.

PARAMETER DESCRIPTION
config

Typed detector or analyzer configuration that controls thresholds. Defaults to None.

TYPE: AnalyzerConfig | None DEFAULT: None

pipeline_config

Optional pipeline overrides used to customize detector configuration. Defaults to None.

TYPE: 'PipelineConfig' | None DEFAULT: None

Source code in src/mcp_zen_of_languages/languages/rust/analyzer.py
def __init__(
    self,
    config: AnalyzerConfig | None = None,
    pipeline_config: PipelineConfig | None = None,
) -> None:
    """Initialize instance.

    Args:
        config (AnalyzerConfig | None, optional): Typed detector or analyzer configuration that controls thresholds. Defaults to None.
        pipeline_config ('PipelineConfig' | None, optional): Optional pipeline overrides used to customize detector configuration. Defaults to None.
    """
    self._pipeline_config = pipeline_config
    super().__init__(config=config)
Functions
default_config
default_config()

Return default analyzer configuration for this language.

RETURNS DESCRIPTION
AnalyzerConfig

Default analyzer settings for the current language implementation.

TYPE: AnalyzerConfig

Source code in src/mcp_zen_of_languages/languages/rust/analyzer.py
def default_config(self) -> AnalyzerConfig:
    """Return default analyzer configuration for this language.

    Returns:
        AnalyzerConfig: Default analyzer settings for the current language implementation.
    """
    return AnalyzerConfig()
language
language()

Return the analyzer language key.

RETURNS DESCRIPTION
str

Identifier string consumed by callers.

TYPE: str

Source code in src/mcp_zen_of_languages/languages/rust/analyzer.py
def language(self) -> str:
    """Return the analyzer language key.

    Returns:
        str: Identifier string consumed by callers.
    """
    return "rust"
capabilities
capabilities()

Declare support for use/mod/extern crate dependency extraction.

Source code in src/mcp_zen_of_languages/languages/rust/analyzer.py
def capabilities(self) -> AnalyzerCapabilities:
    """Declare support for use/mod/extern crate dependency extraction."""
    return AnalyzerCapabilities(supports_dependency_analysis=True)
parse_code
parse_code(_code)

Parse source text into a language parser result when available.

PARAMETER DESCRIPTION
_code

Source code text being parsed or analyzed.

TYPE: str

RETURNS DESCRIPTION
ParserResult | None

ParserResult | None: Normalized parser output, or None when parsing is unavailable.

Source code in src/mcp_zen_of_languages/languages/rust/analyzer.py
def parse_code(self, _code: str) -> ParserResult | None:
    """Parse source text into a language parser result when available.

    Args:
        _code (str): Source code text being parsed or analyzed.

    Returns:
        ParserResult | None: Normalized parser output, or ``None`` when parsing is unavailable.
    """
    return None
compute_metrics
compute_metrics(code, _ast_tree)

Compute complexity, maintainability, and line-count metrics.

PARAMETER DESCRIPTION
code

Source code text being parsed or analyzed.

TYPE: str

_ast_tree

Parsed syntax tree produced by the language parser, when available.

TYPE: ParserResult | None

RETURNS DESCRIPTION
tuple[CyclomaticSummary | None, float | None, int]

tuple[CyclomaticSummary | None, float | None, int]: Tuple containing computed metrics in analyzer-defined order.

Source code in src/mcp_zen_of_languages/languages/rust/analyzer.py
def compute_metrics(
    self,
    code: str,
    _ast_tree: ParserResult | None,
) -> tuple[CyclomaticSummary | None, float | None, int]:
    """Compute complexity, maintainability, and line-count metrics.

    Args:
        code (str): Source code text being parsed or analyzed.
        _ast_tree (ParserResult | None): Parsed syntax tree produced by the language parser, when available.

    Returns:
        tuple[CyclomaticSummary | None, float | None, int]: Tuple containing computed metrics in analyzer-defined order.
    """
    return None, None, len(code.splitlines())
build_pipeline
build_pipeline()

Build the detector pipeline for this analyzer.

RETURNS DESCRIPTION
DetectionPipeline

Pipeline instance used to run configured detectors.

TYPE: DetectionPipeline

Source code in src/mcp_zen_of_languages/languages/rust/analyzer.py
def build_pipeline(self) -> DetectionPipeline:
    """Build the detector pipeline for this analyzer.

    Returns:
        DetectionPipeline: Pipeline instance used to run configured detectors.
    """
    return super().build_pipeline()