Rules

mcp_zen_of_languages.rules.base_models

Canonical Pydantic models that define the shape of every zen principle.

This module is the single source of truth for the ZenPrinciple / LanguageZenPrinciples hierarchy. Language rule files (e.g. languages/python/rules.py) instantiate these models; analyzers, detectors, and the DetectionPipeline consume them.

Key concepts:

  • Each ZenPrinciple pairs a human-readable principle statement with machine-readable metrics, detectable_patterns, and violations that drive the detection pipeline.
  • Severity (1-10) controls violation priority and is persisted into every ViolationReport.
  • LanguageZenPrinciples groups all principles for a language together with provenance metadata (philosophy, source URL).

Classes

SeverityLevel

Bases: int, Enum

Numeric severity scale (1-10) used to rank zen violations.

Levels are grouped into four bands:

  • Informational (1-3): Style suggestions with no functional impact.
  • Warning (4-6): Likely maintainability or readability concerns.
  • Error (7-8): Strong anti-patterns that hinder correctness.
  • Critical (9-10): Severe violations requiring immediate attention.
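The four bands above can be sketched as a simple classifier. This is an illustration of the banding only; the actual `SeverityLevel` members defined in `base_models.py` may differ:

```python
def band(severity: int) -> str:
    """Map a 1-10 severity score to its band name."""
    if severity >= 9:
        return "critical"
    if severity >= 7:
        return "error"
    if severity >= 4:
        return "warning"
    return "informational"

print(band(3), band(5), band(8), band(10))
```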

PrincipleCategory

Bases: StrEnum

Taxonomy tag that groups related zen principles.

Analyzers and reporters use these categories to organise output (e.g. "show all readability violations") and to build per-category coverage reports. Each ZenPrinciple carries exactly one category.

ViolationSpec

Bases: BaseModel

A single concrete way in which a zen principle can be violated.

Principles may list multiple ViolationSpec entries — each describes one observable symptom. The id must be stable across releases so that suppression rules and audit trails remain valid.

ATTRIBUTE DESCRIPTION
id

Stable, slug-style identifier (e.g. "bare-except-clause").

TYPE: str

description

Human-readable explanation of the specific violation.

TYPE: str

ZenPrinciple

Bases: BaseModel

A single zen principle with its detection rules, metrics, and patterns.

This is the atomic unit of the zen rule system. Each principle carries enough metadata for the DetectionPipeline to decide what to detect (detectable_patterns, metrics) and how to report it (severity, recommended_alternative).

ATTRIBUTE DESCRIPTION
id

Globally unique identifier following {language}-{number} convention (e.g. "python-001").

TYPE: str

principle

The idiomatic best-practice statement, written as a short imperative sentence.

TYPE: str

category

Taxonomy tag for grouping related principles.

TYPE: PrincipleCategory

severity

Impact score on a 1-10 scale (9-10 is critical).

TYPE: int

description

Paragraph-length explanation of why this principle matters.

TYPE: str

violations

Concrete symptoms that indicate a breach of the principle. Accepts raw strings (auto-slugified) or ViolationSpec dicts.

TYPE: list[ViolationSpec | str]

detectable_patterns

Regex strings matched against source code by compiled_patterns.

TYPE: list[str] | None

metrics

Threshold key/value pairs consumed by detectors (e.g. {"max_nesting_depth": 3}).

TYPE: dict[str, Any] | None

recommended_alternative

Suggested refactoring or idiom to replace the violating pattern.

TYPE: str | None

required_config

Tool or linter settings that must be active for the principle to be enforceable.

TYPE: dict[str, Any] | None
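The `violations` field accepts raw strings that are auto-slugified into `ViolationSpec` entries. A plausible sketch of that normalisation (the validator's exact rules may differ):

```python
import re

def slugify(text: str) -> str:
    # Lowercase, then collapse runs of non-alphanumerics into single hyphens.
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

# A raw violation string becomes a stable, slug-style id.
print(slugify("Bare except clause"))
```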

See Also

LanguageZenPrinciples — the per-language collection that holds these principles.

Attributes
violation_specs property
violation_specs

Return the normalised ViolationSpec list for this principle.

RETURNS DESCRIPTION
list[ViolationSpec]

list[ViolationSpec]: Deduplicated violation specs, identical to reading violations after validator processing.

Functions
compiled_patterns
compiled_patterns()

Compile detectable_patterns into re.Pattern objects for reuse.

Invalid regex strings are silently escaped so they behave as literal substring matchers. Returns an empty list when no patterns are defined.

RETURNS DESCRIPTION
list[re.Pattern]

list['re.Pattern']: Compiled regex objects, one per entry in detectable_patterns.

Source code in src/mcp_zen_of_languages/rules/base_models.py
def compiled_patterns(self) -> list["re.Pattern"]:
    """Compile ``detectable_patterns`` into ``re.Pattern`` objects for reuse.

    Invalid regex strings are silently escaped so they behave as literal
    substring matchers.  Returns an empty list when no patterns are defined.

    Returns:
        list['re.Pattern']: Compiled regex objects, one per entry in ``detectable_patterns``.
    """
    import re

    patterns = self.detectable_patterns or []
    compiled: list[re.Pattern] = []
    for p in patterns:
        try:
            compiled.append(re.compile(p))
        except re.error:
            # Fall back to literal substring match by escaping
            compiled.append(re.compile(re.escape(p)))
    return compiled
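The fallback behaviour can be demonstrated standalone: an invalid pattern, such as one with an unbalanced bracket, is escaped so it matches only as a literal substring.

```python
import re

def compile_or_escape(pattern: str) -> re.Pattern:
    """Compile a regex, falling back to a literal matcher on re.error."""
    try:
        return re.compile(pattern)
    except re.error:
        return re.compile(re.escape(pattern))

valid = compile_or_escape(r"except\s*:")   # compiles as a real regex
literal = compile_or_escape("foo[")        # invalid regex: unbalanced bracket

print(bool(valid.search("except :")))      # matches as a pattern
print(bool(literal.search("call foo[0]"))) # matches only as a literal substring
```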

LanguageZenPrinciples

Bases: BaseModel

The complete set of zen principles for a single programming language.

Each instance records provenance (source document, URL) and holds the ordered list of ZenPrinciple entries that analyzers iterate over.

ATTRIBUTE DESCRIPTION
language

Lowercase key used throughout the registry (e.g. "python").

TYPE: str

name

Display-friendly language name (e.g. "Python").

TYPE: str

philosophy

Name of the guiding document or philosophy (e.g. "The Zen of Python (PEP 20)").

TYPE: str

source_text

Title of the authoritative style guide.

TYPE: str

source_url

URL to the upstream style guide or specification.

TYPE: HttpUrl

principles

Ordered list of zen principles for this language.

TYPE: list[ZenPrinciple]

Attributes
principle_count property
principle_count

Total number of principles registered for this language.

RETURNS DESCRIPTION
int

Length of the principles list.

TYPE: int

Functions
get_by_id
get_by_id(principle_id)

Look up a principle by its unique ID (e.g. "python-003").

PARAMETER DESCRIPTION
principle_id

The ZenPrinciple.id to search for.

TYPE: str

RETURNS DESCRIPTION
ZenPrinciple | None

ZenPrinciple | None: The matching principle, or None if no principle has that ID.

Source code in src/mcp_zen_of_languages/rules/base_models.py
def get_by_id(self, principle_id: str) -> ZenPrinciple | None:
    """Look up a principle by its unique ID (e.g. ``"python-003"``).

    Args:
        principle_id (str): The ``ZenPrinciple.id`` to search for.

    Returns:
        ZenPrinciple | None: The matching principle, or ``None`` if no principle has that ID.
    """
    return next(
        (
            principle
            for principle in self.principles
            if principle.id == principle_id
        ),
        None,
    )
get_by_category
get_by_category(category)

Filter principles belonging to category.

PARAMETER DESCRIPTION
category

The PrincipleCategory to match against.

TYPE: PrincipleCategory

RETURNS DESCRIPTION
list[ZenPrinciple]

list[ZenPrinciple]: Principles whose category field equals category (may be empty).

Source code in src/mcp_zen_of_languages/rules/base_models.py
def get_by_category(self, category: PrincipleCategory) -> list[ZenPrinciple]:
    """Filter principles belonging to *category*.

    Args:
        category (PrincipleCategory): The ``PrincipleCategory`` to match against.

    Returns:
        list[ZenPrinciple]: Principles whose ``category`` field equals *category* (may be empty).
    """
    return [p for p in self.principles if p.category == category]
get_by_severity
get_by_severity(min_severity=7)

Return principles at or above min_severity.

PARAMETER DESCRIPTION
min_severity

Inclusive lower bound on the 1-10 severity scale. Defaults to 7.

TYPE: int DEFAULT: 7

RETURNS DESCRIPTION
list[ZenPrinciple]

list[ZenPrinciple]: Principles whose severity is ≥ min_severity.

Source code in src/mcp_zen_of_languages/rules/base_models.py
def get_by_severity(self, min_severity: int = 7) -> list[ZenPrinciple]:
    """Return principles at or above *min_severity*.

    Args:
        min_severity (int, optional): Inclusive lower bound on the 1-10 severity scale. Defaults to 7.

    Returns:
        list[ZenPrinciple]: Principles whose ``severity`` is ≥ *min_severity*.
    """
    return [p for p in self.principles if p.severity >= min_severity]
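The three lookup helpers reduce to straightforward comprehensions. Sketched here over plain tuples standing in for `ZenPrinciple` instances (the ids and category names are illustrative):

```python
# (id, category, severity) triples standing in for ZenPrinciple objects.
principles = [
    ("python-001", "readability", 5),
    ("python-002", "error_handling", 9),
    ("python-003", "readability", 7),
]

def get_by_id(pid):
    return next((p for p in principles if p[0] == pid), None)

def get_by_category(cat):
    return [p for p in principles if p[1] == cat]

def get_by_severity(min_severity=7):
    return [p for p in principles if p[2] >= min_severity]

print(get_by_id("python-003"))
print(len(get_by_category("readability")))
print([p[0] for p in get_by_severity()])
```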

ViolationReport

Bases: BaseModel

Structured report emitted when a zen principle is violated.

Carries everything a reporter needs: the violated principle's identity, severity, location in source, a human-readable message, and an optional suggested fix.

ATTRIBUTE DESCRIPTION
principle_id

ID of the violated ZenPrinciple.

TYPE: str

principle_name

Display name / statement of the principle.

TYPE: str

severity

1-10 severity copied from the principle.

TYPE: int

category

Taxonomy tag of the violated principle.

TYPE: PrincipleCategory

location

Optional {"line": int, "column": int} pointing into the source file.

TYPE: dict[str, int] | None

message

One-line human-readable violation description.

TYPE: str

suggestion

Recommended refactoring to resolve the violation.

TYPE: str | None

code_snippet

Extract of the offending source code.

TYPE: str | None

AnalysisResult

Bases: BaseModel

Aggregate outcome of analysing a source file against zen principles.

Contains the full violation list, a normalised 0-100 score, and any metrics (complexity, LOC, etc.) computed during analysis.

ATTRIBUTE DESCRIPTION
language

Lowercase language key that was analysed.

TYPE: str

violations

Ordered list of ViolationReport entries.

TYPE: list[ViolationReport]

overall_score

Zen score where 100 means no violations.

TYPE: float

metrics

Free-form metric dictionary (keys vary by analyzer).

TYPE: dict[str, Any]

summary

Human-readable prose summarising the analysis.

TYPE: str

Attributes
critical_violations property
critical_violations

Subset of violations with severity ≥ 9 (critical band).

RETURNS DESCRIPTION
list[ViolationReport]

list[ViolationReport]: ViolationReport entries at the critical severity level.

violation_count property
violation_count

Total number of violations recorded in this result.

RETURNS DESCRIPTION
int

Length of the violations list.

TYPE: int

LanguageSummary

Bases: BaseModel

Lightweight snapshot of a language's presence in the registry.

Used by RegistryStats to avoid serialising the full principle list when only high-level metadata is needed.

ATTRIBUTE DESCRIPTION
name

Human-readable language name.

TYPE: str

principle_count

How many principles are registered.

TYPE: int

philosophy

Core philosophy or guiding document title.

TYPE: str

source_text

Name of the upstream style guide (optional).

TYPE: str | None

source_url

URL of the upstream style guide (optional).

TYPE: HttpUrl | None

DetectorConfig

Bases: BaseModel

Language-agnostic configuration bundle passed to individual detectors.

Built by RulesAdapter.get_detector_config or by the DetectionPipeline from ZenPrinciple.metrics. Detectors read thresholds and patterns from this model instead of inspecting raw principle objects, keeping them decoupled from the rule schema.

ATTRIBUTE DESCRIPTION
name

Detector identifier (e.g. "cyclomatic_complexity").

TYPE: str

thresholds

Numeric limits keyed by metric name (e.g. {"max_nesting_depth": 3.0}).

TYPE: dict[str, float]

patterns

Regex strings the detector should match against source.

TYPE: list[str]

metadata

Arbitrary non-numeric configuration values.

TYPE: dict[str, Any]
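A sketch of how a pipeline might split `ZenPrinciple.metrics` into the numeric `thresholds` and non-numeric `metadata` of a `DetectorConfig`. The field-splitting logic here is an assumption for illustration, not the library's exact behaviour:

```python
# Hypothetical metrics dict from a ZenPrinciple.
metrics = {"max_nesting_depth": 3, "max_function_lines": 50, "style": "pep8"}

# Numeric values become float thresholds; everything else lands in metadata.
thresholds = {k: float(v) for k, v in metrics.items() if isinstance(v, (int, float))}
metadata = {k: v for k, v in metrics.items() if not isinstance(v, (int, float))}

print(thresholds)
print(metadata)
```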

RegistryStats

Bases: BaseModel

Aggregated statistics across the entire ZEN_REGISTRY.

Constructed via the from_registry classmethod. Useful for dashboard rendering and health-check endpoints.

ATTRIBUTE DESCRIPTION
total_languages

Number of languages with registered principles.

TYPE: int

total_principles

Sum of principles across all languages.

TYPE: int

languages

Per-language LanguageSummary keyed by language identifier.

TYPE: dict[str, LanguageSummary]

Functions
from_registry classmethod
from_registry(registry)

Snapshot the live registry into a serialisable RegistryStats model.

PARAMETER DESCRIPTION
registry

The ZEN_REGISTRY mapping to summarise.

TYPE: dict[str, LanguageZenPrinciples]

RETURNS DESCRIPTION
RegistryStats

RegistryStats: A fully populated RegistryStats instance.

Source code in src/mcp_zen_of_languages/rules/base_models.py
@classmethod
def from_registry(
    cls,
    registry: dict[str, "LanguageZenPrinciples"],
) -> "RegistryStats":
    """Snapshot the live registry into a serialisable ``RegistryStats`` model.

    Args:
        registry (dict[str, LanguageZenPrinciples]): The ``ZEN_REGISTRY`` mapping to summarise.

    Returns:
        'RegistryStats': A fully populated ``RegistryStats`` instance.
    """
    total_languages = len(registry)
    total_principles = sum(lang.principle_count for lang in registry.values())

    languages: dict[str, LanguageSummary] = {
        key: LanguageSummary(
            name=lang.name,
            principle_count=lang.principle_count,
            philosophy=lang.philosophy,
            source_text=getattr(lang, "source_text", None),
            source_url=getattr(lang, "source_url", None),
        )
        for key, lang in registry.items()
    }
    return cls(
        total_languages=total_languages,
        total_principles=total_principles,
        languages=languages,
    )
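The aggregation `from_registry` performs can be sketched over plain dicts standing in for `LanguageZenPrinciples` entries (the counts are illustrative):

```python
# Plain-dict stand-ins for LanguageZenPrinciples entries.
registry = {
    "python": {"name": "Python", "principle_count": 20},
    "go": {"name": "Go", "principle_count": 12},
}

total_languages = len(registry)
total_principles = sum(lang["principle_count"] for lang in registry.values())

print(total_languages, total_principles)
```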

Functions

get_number_of_principles

get_number_of_principles(language)

Count principles defined for a single language.

PARAMETER DESCRIPTION
language

The LanguageZenPrinciples instance to inspect.

TYPE: LanguageZenPrinciples

RETURNS DESCRIPTION
int

Number of ZenPrinciple entries in language.

TYPE: int

Source code in src/mcp_zen_of_languages/rules/base_models.py
def get_number_of_principles(language: LanguageZenPrinciples) -> int:
    """Count principles defined for a single language.

    Args:
        language (LanguageZenPrinciples): The ``LanguageZenPrinciples`` instance to inspect.

    Returns:
        int: Number of ``ZenPrinciple`` entries in *language*.
    """
    return language.principle_count

get_number_of_priniciple

get_number_of_priniciple(language)

Backward-compatible alias for get_number_of_principles (typo preserved).

PARAMETER DESCRIPTION
language

Delegated to get_number_of_principles.

TYPE: LanguageZenPrinciples

RETURNS DESCRIPTION
int

Same as get_number_of_principles.

TYPE: int

Source code in src/mcp_zen_of_languages/rules/base_models.py
def get_number_of_priniciple(language: LanguageZenPrinciples) -> int:
    """Backward-compatible alias for ``get_number_of_principles`` (typo preserved).

    Args:
        language (LanguageZenPrinciples): Delegated to ``get_number_of_principles``.

    Returns:
        int: Same as ``get_number_of_principles``.
    """
    return get_number_of_principles(language)

get_rule_ids

get_rule_ids(language)

Collect the unique ZenPrinciple.id values for a language.

PARAMETER DESCRIPTION
language

The LanguageZenPrinciples to extract IDs from.

TYPE: LanguageZenPrinciples

RETURNS DESCRIPTION
set[str]

set[str]: Set of principle IDs (e.g. {"python-001", "python-002", …}).

Source code in src/mcp_zen_of_languages/rules/base_models.py
def get_rule_ids(language: LanguageZenPrinciples) -> set[str]:
    """Collect the unique ``ZenPrinciple.id`` values for a language.

    Args:
        language (LanguageZenPrinciples): The ``LanguageZenPrinciples`` to extract IDs from.

    Returns:
        set[str]: Set of principle IDs (e.g. ``{"python-001", "python-002", …}``).
    """
    return {principle.id for principle in language.principles}

get_total_principles

get_total_principles(registry)

Sum principle counts across every language in registry.

PARAMETER DESCRIPTION
registry

The ZEN_REGISTRY mapping (language key → principles).

TYPE: dict[str, LanguageZenPrinciples]

RETURNS DESCRIPTION
int

Aggregate principle count.

TYPE: int

Source code in src/mcp_zen_of_languages/rules/base_models.py
def get_total_principles(
    registry: dict[str, LanguageZenPrinciples],
) -> int:
    """Sum principle counts across every language in *registry*.

    Args:
        registry (dict[str, LanguageZenPrinciples]): The ``ZEN_REGISTRY`` mapping (language key → principles).

    Returns:
        int: Aggregate principle count.
    """
    return sum(language.principle_count for language in registry.values())

get_missing_detector_rules

get_missing_detector_rules(language, *, explicit_only=True)

Identify principle IDs that have no dedicated detector registered.

When explicit_only is True (the default), the generic RulePatternDetector fallback is ignored — only purpose-built detectors count as coverage.

PARAMETER DESCRIPTION
language

The LanguageZenPrinciples whose coverage is checked.

TYPE: LanguageZenPrinciples

explicit_only

If True, exclude the catch-all pattern detector from the coverage calculation. Defaults to True.

TYPE: bool DEFAULT: True

RETURNS DESCRIPTION
list[str]

list[str]: Sorted list of principle IDs without detector coverage.

See Also

get_rule_id_coverage — also reports unknown detector mappings.

Source code in src/mcp_zen_of_languages/rules/base_models.py
def get_missing_detector_rules(
    language: LanguageZenPrinciples,
    *,
    explicit_only: bool = True,
) -> list[str]:
    """Identify principle IDs that have no dedicated detector registered.

    When *explicit_only* is ``True`` (the default), the generic
    ``RulePatternDetector`` fallback is ignored — only purpose-built
    detectors count as coverage.

    Args:
        language (LanguageZenPrinciples): The ``LanguageZenPrinciples`` whose coverage is checked.
        explicit_only (bool, optional): If ``True``, exclude the catch-all pattern detector
            from the coverage calculation. Defaults to True.

    Returns:
        list[str]: Sorted list of principle IDs without detector coverage.

    See Also:
        ``get_rule_id_coverage`` — also reports *unknown* detector mappings.
    """
    from mcp_zen_of_languages.analyzers import registry_bootstrap  # noqa: F401
    from mcp_zen_of_languages.analyzers.registry import REGISTRY
    from mcp_zen_of_languages.languages.rule_pattern import RulePatternDetector

    missing: list[str] = []
    for principle in language.principles:
        metas = REGISTRY.detectors_for_rule(principle.id, language.language)
        if explicit_only:
            metas = [
                meta for meta in metas if meta.detector_class is not RulePatternDetector
            ]
        if not metas:
            missing.append(principle.id)
    return missing

get_rule_id_coverage

get_rule_id_coverage(language, *, explicit_only=True)

Compute bidirectional coverage between principles and detectors.

Returns two lists:

  • missing — principle IDs with no mapped detector.
  • unknown — detector rule-IDs that reference principles not defined in the canonical rule set (potential stale registrations).

PARAMETER DESCRIPTION
language

The LanguageZenPrinciples to audit.

TYPE: LanguageZenPrinciples

explicit_only

When True, ignore RulePatternDetector fallbacks. Defaults to True.

TYPE: bool DEFAULT: True

RETURNS DESCRIPTION
tuple[list[str], list[str]]

tuple[list[str], list[str]]: (missing, unknown) — both lists sorted alphabetically.

Source code in src/mcp_zen_of_languages/rules/base_models.py
def get_rule_id_coverage(
    language: LanguageZenPrinciples,
    *,
    explicit_only: bool = True,
) -> tuple[list[str], list[str]]:
    """Compute bidirectional coverage between principles and detectors.

    Returns two lists:

    * **missing** — principle IDs with no mapped detector.
    * **unknown** — detector rule-IDs that reference principles not defined
      in the canonical rule set (potential stale registrations).

    Args:
        language (LanguageZenPrinciples): The ``LanguageZenPrinciples`` to audit.
        explicit_only (bool, optional): When ``True``, ignore ``RulePatternDetector`` fallbacks. Defaults to True.

    Returns:
        tuple[list[str], list[str]]: ``(missing, unknown)`` — both lists sorted alphabetically.
    """
    from mcp_zen_of_languages.analyzers import registry_bootstrap  # noqa: F401
    from mcp_zen_of_languages.analyzers.registry import REGISTRY
    from mcp_zen_of_languages.languages.rule_pattern import RulePatternDetector

    expected = get_rule_ids(language)
    mapped: set[str] = set()
    for meta in REGISTRY.items():
        if meta.language != language.language:
            continue
        if explicit_only and meta.detector_class is RulePatternDetector:
            continue
        mapped.update(meta.rule_ids)
    missing = sorted(expected - mapped)
    unknown = sorted(mapped - expected)
    return missing, unknown
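The bidirectional comparison at the heart of this function is plain set arithmetic, shown here with illustrative rule IDs:

```python
# Principle IDs defined in the rule set vs rule IDs claimed by detectors.
expected = {"python-001", "python-002", "python-003"}
mapped = {"python-002", "python-003", "python-099"}

missing = sorted(expected - mapped)   # rules no detector covers
unknown = sorted(mapped - expected)   # stale detector registrations

print(missing)
print(unknown)
```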

get_registry_rule_id_gaps

get_registry_rule_id_gaps(registry, *, explicit_only=True)

Run get_rule_id_coverage across every language and collect gaps.

Languages with no gaps are omitted from the result.

PARAMETER DESCRIPTION
registry

Full ZEN_REGISTRY mapping.

TYPE: dict[str, LanguageZenPrinciples]

explicit_only

Passed through to get_rule_id_coverage. Defaults to True.

TYPE: bool DEFAULT: True

RETURNS DESCRIPTION
dict[str, dict[str, list[str]]]

dict[str, dict[str, list[str]]]: {language: {"missing": [...], "unknown": [...]}} for each language that has at least one gap.

Source code in src/mcp_zen_of_languages/rules/base_models.py
def get_registry_rule_id_gaps(
    registry: dict[str, LanguageZenPrinciples],
    *,
    explicit_only: bool = True,
) -> dict[str, dict[str, list[str]]]:
    """Run ``get_rule_id_coverage`` across every language and collect gaps.

    Languages with no gaps are omitted from the result.

    Args:
        registry (dict[str, LanguageZenPrinciples]): Full ``ZEN_REGISTRY`` mapping.
        explicit_only (bool, optional): Passed through to ``get_rule_id_coverage``. Defaults to True.

    Returns:
        dict[str, dict[str, list[str]]]: ``{language: {"missing": [...], "unknown": [...]}}`` for each
        language that has at least one gap.
    """
    gaps: dict[str, dict[str, list[str]]] = {}
    for language in registry.values():
        missing, unknown = get_rule_id_coverage(language, explicit_only=explicit_only)
        if missing or unknown:
            gaps[language.language] = {"missing": missing, "unknown": unknown}
    return gaps

get_registry_detector_gaps

get_registry_detector_gaps(registry, *, explicit_only=True)

Return principle IDs lacking a dedicated detector for each language.

This is a convenience wrapper around get_missing_detector_rules applied to every language in registry.

PARAMETER DESCRIPTION
registry

Full ZEN_REGISTRY mapping.

TYPE: dict[str, LanguageZenPrinciples]

explicit_only

Passed through to get_missing_detector_rules. Defaults to True.

TYPE: bool DEFAULT: True

RETURNS DESCRIPTION
dict[str, list[str]]

dict[str, list[str]]: {language: [rule_id, …]} for each language with missing detectors.

Source code in src/mcp_zen_of_languages/rules/base_models.py
def get_registry_detector_gaps(
    registry: dict[str, LanguageZenPrinciples],
    *,
    explicit_only: bool = True,
) -> dict[str, list[str]]:
    """Return principle IDs lacking a dedicated detector for each language.

    This is a convenience wrapper around ``get_missing_detector_rules``
    applied to every language in *registry*.

    Args:
        registry (dict[str, LanguageZenPrinciples]): Full ``ZEN_REGISTRY`` mapping.
        explicit_only (bool, optional): Passed through to ``get_missing_detector_rules``. Defaults to True.

    Returns:
        dict[str, list[str]]: ``{language: [rule_id, …]}`` for each language with missing detectors.
    """
    gaps: dict[str, list[str]] = {}
    for language in registry.values():
        if missing := get_missing_detector_rules(language, explicit_only=explicit_only):
            gaps[language.language] = missing
    return gaps

mcp_zen_of_languages.rules.coverage

Rule-to-detector coverage analysis.

Provides functions and Pydantic models that answer the question: "Does every zen principle have at least one detector that can enforce it?"

Two granularity levels are supported:

  • Rule coverage (RuleCoverageMap) — maps principle IDs to detector IDs.
  • Config coverage (RuleConfigCoverageMap) — maps principle IDs to DetectorConfig classes, useful for verifying that the pipeline can build a valid config for every registered detector.

Each function comes in inclusive and explicit variants. Inclusive variants count the generic RulePatternDetector fallback; explicit variants require a purpose-built detector and raise ValueError when one is missing.

Classes

RuleCoverageMap

Bases: BaseModel

Maps each principle ID to the detector IDs registered for it.

ATTRIBUTE DESCRIPTION
language

Lowercase language key (e.g. "python").

TYPE: str

rules

{principle_id: [detector_id, …]} — empty lists indicate uncovered principles.

TYPE: dict[str, list[str]]

Functions
detector_counts
detector_counts()

Return the number of detectors backing each principle.

RETURNS DESCRIPTION
dict[str, int]

dict[str, int]: {principle_id: count} — zero means the rule is uncovered.

Source code in src/mcp_zen_of_languages/rules/coverage.py
def detector_counts(self) -> dict[str, int]:
    """Return the number of detectors backing each principle.

    Returns:
        dict[str, int]: ``{principle_id: count}`` — zero means the rule is uncovered.
    """
    return {rule_id: len(detectors) for rule_id, detectors in self.rules.items()}

RuleConfigCoverageMap

Bases: BaseModel

Maps each principle ID to the DetectorConfig subclasses that serve it.

ATTRIBUTE DESCRIPTION
language

Lowercase language key.

TYPE: str

rules

{principle_id: [ConfigClass, …]} — used to verify that every detector can be instantiated with a valid config model.

TYPE: dict[str, list[type[DetectorConfig]]]

Functions
config_counts
config_counts()

Return the number of distinct config classes per principle.

RETURNS DESCRIPTION
dict[str, int]

dict[str, int]: {principle_id: count} — zero means no config class is bound.

Source code in src/mcp_zen_of_languages/rules/coverage.py
def config_counts(self) -> dict[str, int]:
    """Return the number of distinct config classes per principle.

    Returns:
        dict[str, int]: ``{principle_id: count}`` — zero means no config class is bound.
    """
    return {rule_id: len(configs) for rule_id, configs in self.rules.items()}

Functions

build_rule_coverage

build_rule_coverage(language)

Build an inclusive rule-to-detector map for language.

Includes RulePatternDetector fallback registrations alongside purpose-built detectors.

PARAMETER DESCRIPTION
language

Lowercase language key (e.g. "python").

TYPE: str

RETURNS DESCRIPTION
RuleCoverageMap

A RuleCoverageMap covering every principle defined for language.

TYPE: RuleCoverageMap

RAISES DESCRIPTION
ValueError

If language is not present in the registry.

Source code in src/mcp_zen_of_languages/rules/coverage.py
def build_rule_coverage(language: str) -> RuleCoverageMap:
    """Build an inclusive rule-to-detector map for *language*.

    Includes ``RulePatternDetector`` fallback registrations alongside
    purpose-built detectors.

    Args:
        language (str): Lowercase language key (e.g. ``"python"``).

    Returns:
        RuleCoverageMap: A ``RuleCoverageMap`` covering every principle defined for *language*.

    Raises:
        ValueError: If *language* is not present in the registry.
    """
    from mcp_zen_of_languages.analyzers import registry_bootstrap  # noqa: F401
    from mcp_zen_of_languages.analyzers.registry import REGISTRY

    lang_zen = get_language_zen(language)
    if lang_zen is None:
        msg = f"Unknown language: {language}"
        raise ValueError(msg)
    rules: dict[str, list[str]] = {}
    for principle in lang_zen.principles:
        metas = REGISTRY.detectors_for_rule(principle.id, language)
        detector_ids = sorted({meta.detector_id for meta in metas})
        rules[principle.id] = detector_ids
    return RuleCoverageMap(language=language, rules=rules)

build_explicit_rule_coverage

build_explicit_rule_coverage(language)

Build a strict rule-to-detector map excluding RulePatternDetector fallbacks.

Raises ValueError if any principle lacks a purpose-built detector.

PARAMETER DESCRIPTION
language

Lowercase language key.

TYPE: str

RETURNS DESCRIPTION
RuleCoverageMap

A RuleCoverageMap containing only explicitly registered detectors.

TYPE: RuleCoverageMap

RAISES DESCRIPTION
ValueError

If language is unknown or any principle is uncovered.

Source code in src/mcp_zen_of_languages/rules/coverage.py
def build_explicit_rule_coverage(language: str) -> RuleCoverageMap:
    """Build a strict rule-to-detector map excluding ``RulePatternDetector`` fallbacks.

    Raises ``ValueError`` if any principle lacks a purpose-built detector.

    Args:
        language (str): Lowercase language key.

    Returns:
        RuleCoverageMap: A ``RuleCoverageMap`` containing only explicitly registered detectors.

    Raises:
        ValueError: If *language* is unknown or any principle is uncovered.
    """
    from mcp_zen_of_languages.analyzers import registry_bootstrap  # noqa: F401
    from mcp_zen_of_languages.analyzers.registry import REGISTRY

    lang_zen = get_language_zen(language)
    if lang_zen is None:
        msg = f"Unknown language: {language}"
        raise ValueError(msg)
    rules: dict[str, list[str]] = {}
    for principle in lang_zen.principles:
        metas = REGISTRY.detectors_for_rule(principle.id, language)
        if detector_ids := sorted(
            {
                meta.detector_id
                for meta in metas
                if meta.detector_class is not RulePatternDetector
            },
        ):
            rules[principle.id] = detector_ids
        else:
            msg = f"Explicit coverage missing for {language} rule {principle.id}"
            raise ValueError(msg)
    return RuleCoverageMap(language=language, rules=rules)

build_rule_config_coverage

build_rule_config_coverage(language)

Build an inclusive rule-to-config-class map for language.

PARAMETER DESCRIPTION
language

Lowercase language key.

TYPE: str

RETURNS DESCRIPTION
RuleConfigCoverageMap

A RuleConfigCoverageMap listing every DetectorConfig subclass serving each principle.

TYPE: RuleConfigCoverageMap

RAISES DESCRIPTION
ValueError

If language is not present in the registry.

Source code in src/mcp_zen_of_languages/rules/coverage.py
def build_rule_config_coverage(language: str) -> RuleConfigCoverageMap:
    """Build an inclusive rule-to-config-class map for *language*.

    Args:
        language (str): Lowercase language key.

    Returns:
        RuleConfigCoverageMap: A ``RuleConfigCoverageMap`` listing every ``DetectorConfig`` subclass
        serving each principle.

    Raises:
        ValueError: If *language* is not present in the registry.
    """
    from mcp_zen_of_languages.analyzers import registry_bootstrap  # noqa: F401
    from mcp_zen_of_languages.analyzers.registry import REGISTRY

    lang_zen = get_language_zen(language)
    if lang_zen is None:
        msg = f"Unknown language: {language}"
        raise ValueError(msg)
    rules: dict[str, list[type[DetectorConfig]]] = {}
    for principle in lang_zen.principles:
        metas = REGISTRY.detectors_for_rule(principle.id, language)
        ordered_metas = sorted(metas, key=lambda meta: meta.detector_id)
        configs: list[type[DetectorConfig]] = []
        seen: set[type[DetectorConfig]] = set()
        for meta in ordered_metas:
            config_model = meta.config_model
            if config_model in seen:
                continue
            seen.add(config_model)
            configs.append(config_model)
        rules[principle.id] = configs
    return RuleConfigCoverageMap(language=language, rules=rules)

build_explicit_rule_config_coverage

build_explicit_rule_config_coverage(language)

Build a strict rule-to-config-class map excluding RulePatternDetector fallbacks.

Raises ValueError if any principle lacks an explicit config class.

PARAMETER DESCRIPTION
language

Lowercase language key.

TYPE: str

RETURNS DESCRIPTION
RuleConfigCoverageMap

A RuleConfigCoverageMap containing only explicitly registered configs.

TYPE: RuleConfigCoverageMap

RAISES DESCRIPTION
ValueError

If language is unknown or any principle is uncovered.

Source code in src/mcp_zen_of_languages/rules/coverage.py
def build_explicit_rule_config_coverage(language: str) -> RuleConfigCoverageMap:
    """Build a strict rule-to-config-class map excluding ``RulePatternDetector`` fallbacks.

    Raises ``ValueError`` if any principle lacks an explicit config class.

    Args:
        language (str): Lowercase language key.

    Returns:
        RuleConfigCoverageMap: A ``RuleConfigCoverageMap`` containing only explicitly registered configs.

    Raises:
        ValueError: If *language* is unknown or any principle is uncovered.
    """
    from mcp_zen_of_languages.analyzers import registry_bootstrap  # noqa: F401
    from mcp_zen_of_languages.analyzers.registry import REGISTRY

    lang_zen = get_language_zen(language)
    if lang_zen is None:
        msg = f"Unknown language: {language}"
        raise ValueError(msg)
    rules: dict[str, list[type[DetectorConfig]]] = {}
    for principle in lang_zen.principles:
        metas = REGISTRY.detectors_for_rule(principle.id, language)
        ordered_metas = sorted(
            (meta for meta in metas if meta.detector_class is not RulePatternDetector),
            key=lambda meta: meta.detector_id,
        )
        configs: list[type[DetectorConfig]] = []
        seen: set[type[DetectorConfig]] = set()
        for meta in ordered_metas:
            config_model = meta.config_model
            if config_model in seen:
                continue
            seen.add(config_model)
            configs.append(config_model)
        if not configs:
            msg = f"Explicit config coverage missing for {language} rule {principle.id}"
            raise ValueError(msg)
        rules[principle.id] = configs
    return RuleConfigCoverageMap(language=language, rules=rules)

build_all_rule_coverage

build_all_rule_coverage(languages=None)

Build inclusive rule coverage maps for all (or selected) languages.

PARAMETER DESCRIPTION
languages

Restrict to these language keys. None means all languages in the registry. Defaults to None.

TYPE: list[str] | None DEFAULT: None

RETURNS DESCRIPTION
list[RuleCoverageMap]

list[RuleCoverageMap]: One RuleCoverageMap per language.

Source code in src/mcp_zen_of_languages/rules/coverage.py
def build_all_rule_coverage(
    languages: list[str] | None = None,
) -> list[RuleCoverageMap]:
    """Build inclusive rule coverage maps for all (or selected) languages.

    Args:
        languages (list[str] | None, optional): Restrict to these language keys.  ``None`` means all
            languages in the registry. Defaults to None.

    Returns:
        list[RuleCoverageMap]: One ``RuleCoverageMap`` per language.
    """
    langs = languages or get_all_languages()
    return [build_rule_coverage(lang) for lang in langs]

build_all_explicit_rule_coverage

build_all_explicit_rule_coverage(languages=None)

Build strict rule coverage maps (no fallback) for all (or selected) languages.

PARAMETER DESCRIPTION
languages

Restrict to these language keys. None means all. Defaults to None.

TYPE: list[str] | None DEFAULT: None

RETURNS DESCRIPTION
list[RuleCoverageMap]

list[RuleCoverageMap]: One RuleCoverageMap per language (raises on gaps).

Source code in src/mcp_zen_of_languages/rules/coverage.py
def build_all_explicit_rule_coverage(
    languages: list[str] | None = None,
) -> list[RuleCoverageMap]:
    """Build strict rule coverage maps (no fallback) for all (or selected) languages.

    Args:
        languages (list[str] | None, optional): Restrict to these language keys.  ``None`` means all. Defaults to None.

    Returns:
        list[RuleCoverageMap]: One ``RuleCoverageMap`` per language (raises on gaps).
    """
    langs = languages or get_all_languages()
    return [build_explicit_rule_coverage(lang) for lang in langs]

build_all_rule_config_coverage

build_all_rule_config_coverage(languages=None)

Build inclusive config coverage maps for all (or selected) languages.

PARAMETER DESCRIPTION
languages

Restrict to these language keys. None means all. Defaults to None.

TYPE: list[str] | None DEFAULT: None

RETURNS DESCRIPTION
list[RuleConfigCoverageMap]

list[RuleConfigCoverageMap]: One RuleConfigCoverageMap per language.

Source code in src/mcp_zen_of_languages/rules/coverage.py
def build_all_rule_config_coverage(
    languages: list[str] | None = None,
) -> list[RuleConfigCoverageMap]:
    """Build inclusive config coverage maps for all (or selected) languages.

    Args:
        languages (list[str] | None, optional): Restrict to these language keys.  ``None`` means all. Defaults to None.

    Returns:
        list[RuleConfigCoverageMap]: One ``RuleConfigCoverageMap`` per language.
    """
    langs = languages or get_all_languages()
    return [build_rule_config_coverage(lang) for lang in langs]

build_all_explicit_rule_config_coverage

build_all_explicit_rule_config_coverage(languages=None)

Build strict config coverage maps (no fallback) for all (or selected) languages.

PARAMETER DESCRIPTION
languages

Restrict to these language keys. None means all. Defaults to None.

TYPE: list[str] | None DEFAULT: None

RETURNS DESCRIPTION
list[RuleConfigCoverageMap]

list[RuleConfigCoverageMap]: One RuleConfigCoverageMap per language (raises on gaps).

Source code in src/mcp_zen_of_languages/rules/coverage.py
def build_all_explicit_rule_config_coverage(
    languages: list[str] | None = None,
) -> list[RuleConfigCoverageMap]:
    """Build strict config coverage maps (no fallback) for all (or selected) languages.

    Args:
        languages (list[str] | None, optional): Restrict to these language keys.  ``None`` means all. Defaults to None.

    Returns:
        list[RuleConfigCoverageMap]: One ``RuleConfigCoverageMap`` per language (raises on gaps).
    """
    langs = languages or get_all_languages()
    return [build_explicit_rule_config_coverage(lang) for lang in langs]

mcp_zen_of_languages.rules.detections

Compatibility shim: re-export detection helpers from rules.tools.detections.

Legacy import path: mcp_zen_of_languages.rules.detections New canonical path: mcp_zen_of_languages.rules.tools.detections

Classes

DuplicateFinding

Bases: BaseModel

A function name that appears in more than one file.

ATTRIBUTE DESCRIPTION
name

Function name found in multiple files.

TYPE: str

files

Paths of the files containing the duplicate definition.

TYPE: list[str]

GodClassFinding

Bases: BaseModel

A class that exceeds method-count or line-count thresholds.

ATTRIBUTE DESCRIPTION
name

Class name.

TYPE: str

method_count

Number of def statements inside the class.

TYPE: int

lines

Total line span of the class body.

TYPE: int

InheritanceFinding

Bases: BaseModel

An inheritance chain that exceeds the allowed depth.

ATTRIBUTE DESCRIPTION
chain

Ordered list of class names from child to deepest ancestor.

TYPE: list[str]

depth

Number of inheritance hops (len(chain) - 1).

TYPE: int

DependencyCycleFinding

Bases: BaseModel

A circular dependency path discovered in the import graph.

ATTRIBUTE DESCRIPTION
cycle

Module names forming the cycle, ending with a repeat of the first element.

TYPE: list[str]

FeatureEnvyFinding

Bases: BaseModel

A method that accesses another object's attributes more than its own.

ATTRIBUTE DESCRIPTION
method

Name of the envious method ("<unknown>" when file-level).

TYPE: str

target_class

Most-referenced external object name.

TYPE: str

occurrences

Number of attribute accesses on target_class.

TYPE: int

SparseCodeFinding

Bases: BaseModel

A source line containing too many semicolon-separated statements.

ATTRIBUTE DESCRIPTION
line

1-based line number.

TYPE: int

statements

Number of statements found on the line.

TYPE: int

ConsistencyFinding

Bases: BaseModel

Mixed naming conventions detected among function definitions.

ATTRIBUTE DESCRIPTION
naming_styles

Sorted list of observed styles (e.g. ["camelCase", "snake_case"]).

TYPE: list[str]

ExplicitnessFinding

Bases: BaseModel

A function with one or more parameters lacking type annotations.

ATTRIBUTE DESCRIPTION
function

Name of the under-annotated function.

TYPE: str

missing_params

Parameter names without type hints.

TYPE: list[str]

NamespaceFinding

Bases: BaseModel

Namespace pollution metrics for a single source file.

ATTRIBUTE DESCRIPTION
top_level_symbols

Count of top-level definitions, imports, and assignments.

TYPE: int

export_count

Number of entries in __all__, or None when __all__ is not defined.

TYPE: int | None

TsAnyFinding

Bases: BaseModel

Occurrences of the any keyword in TypeScript source.

ATTRIBUTE DESCRIPTION
count

Total number of any keyword matches.

TYPE: int

TsTypeAliasFinding

Bases: BaseModel

Object-style type aliases (type X = { … }) found in TypeScript source.

ATTRIBUTE DESCRIPTION
count

Number of object-literal type aliases.

TYPE: int

TsReturnTypeFinding

Bases: BaseModel

Exported functions missing explicit return-type annotations.

ATTRIBUTE DESCRIPTION
count

Number of exported functions without a return type.

TYPE: int

TsReadonlyFinding

Bases: BaseModel

Occurrences of the readonly modifier in TypeScript source.

ATTRIBUTE DESCRIPTION
count

Number of readonly keyword matches.

TYPE: int

TsAssertionFinding

Bases: BaseModel

Type-assertion expressions (as T) found in TypeScript source.

ATTRIBUTE DESCRIPTION
count

Number of as type-assertion matches.

TYPE: int

TsUtilityTypeFinding

Bases: BaseModel

Built-in utility-type references (Partial, Pick, etc.) in TypeScript source.

ATTRIBUTE DESCRIPTION
count

Combined occurrences of Partial, Pick, Omit, Record, and Readonly.

TYPE: int

TsNonNullFinding

Bases: BaseModel

Non-null assertion operators (!) found in TypeScript source.

ATTRIBUTE DESCRIPTION
count

Number of non-null assertion matches.

TYPE: int

TsEnumObjectFinding

Bases: BaseModel

const object literals that may serve as ad-hoc enums.

ATTRIBUTE DESCRIPTION
count

Number of const X = { patterns found.

TYPE: int

TsUnknownAnyFinding

Bases: BaseModel

Comparative counts of any vs unknown keywords in TypeScript source.

ATTRIBUTE DESCRIPTION
any_count

Occurrences of the any keyword.

TYPE: int

unknown_count

Occurrences of the unknown keyword.

TYPE: int

TsOptionalChainingFinding

Bases: BaseModel

Manual null-check chains in TypeScript source.

ATTRIBUTE DESCRIPTION
count

Number of manual null-check chain patterns found.

TYPE: int

TsIndexLoopFinding

Bases: BaseModel

C-style index-based for loops in TypeScript source.

ATTRIBUTE DESCRIPTION
count

Number of index-based loop patterns found.

TYPE: int

TsPromiseChainFinding

Bases: BaseModel

Raw .then() promise chains in TypeScript source.

ATTRIBUTE DESCRIPTION
count

Number of .then() call patterns found.

TYPE: int

TsDefaultExportFinding

Bases: BaseModel

export default statements in TypeScript source.

ATTRIBUTE DESCRIPTION
count

Number of export default patterns found.

TYPE: int

TsCatchAllTypeFinding

Bases: BaseModel

Catch-all type annotations (Object, object, {}) in TypeScript source.

ATTRIBUTE DESCRIPTION
count

Number of catch-all type annotation patterns found.

TYPE: int

TsConsoleUsageFinding

Bases: BaseModel

console.* calls in TypeScript source.

ATTRIBUTE DESCRIPTION
count

Number of console.* call patterns found.

TYPE: int

TsRequireImportFinding

Bases: BaseModel

require() calls in TypeScript source.

ATTRIBUTE DESCRIPTION
count

Number of require() call patterns found.

TYPE: int

TsStringConcatFinding

Bases: BaseModel

String concatenation patterns in TypeScript source.

ATTRIBUTE DESCRIPTION
count

Number of string concatenation patterns found.

TYPE: int

Functions

detect_ts_optional_chaining

detect_ts_optional_chaining(code)

Detect manual null-check chains replaceable by optional chaining.

PARAMETER DESCRIPTION
code

TypeScript source text.

TYPE: str

RETURNS DESCRIPTION
TsOptionalChainingFinding

Detection result.

TYPE: TsOptionalChainingFinding

Source code in src/mcp_zen_of_languages/rules/tools/detections.py
def detect_ts_optional_chaining(code: str) -> TsOptionalChainingFinding:
    """Detect manual null-check chains replaceable by optional chaining.

    Args:
        code (str): TypeScript source text.

    Returns:
        TsOptionalChainingFinding: Detection result.
    """
    pattern = re.compile(r"\b\w+\s*&&\s*\w+\.\w+\s*&&\s*\w+\.\w+\.\w+")
    return TsOptionalChainingFinding(count=len(pattern.findall(code)))
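The regex driving this detector can be exercised directly. A small demonstration on two illustrative TypeScript snippets (the variable names are made up for the example):

```python
import re

# Same pattern as detect_ts_optional_chaining: three &&-joined accesses
# on a deepening property path.
pattern = re.compile(r"\b\w+\s*&&\s*\w+\.\w+\s*&&\s*\w+\.\w+\.\w+")

manual = "if (user && user.profile && user.profile.name) { greet(user); }"
chained = "if (user?.profile?.name) { greet(user); }"

print(len(pattern.findall(manual)))   # 1 -- flagged as a manual null-check chain
print(len(pattern.findall(chained)))  # 0 -- optional chaining passes clean
```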

detect_ts_index_loops

detect_ts_index_loops(code)

Detect C-style index-based for loops.

PARAMETER DESCRIPTION
code

TypeScript source text.

TYPE: str

RETURNS DESCRIPTION
TsIndexLoopFinding

Detection result.

TYPE: TsIndexLoopFinding

Source code in src/mcp_zen_of_languages/rules/tools/detections.py
def detect_ts_index_loops(code: str) -> TsIndexLoopFinding:
    """Detect C-style index-based for loops.

    Args:
        code (str): TypeScript source text.

    Returns:
        TsIndexLoopFinding: Detection result.
    """
    pattern = re.compile(r"for\s*\(\s*let\s+\w+\s*=\s*0\s*;\s*\w+\s*<")
    return TsIndexLoopFinding(count=len(pattern.findall(code)))

detect_ts_promise_chains

detect_ts_promise_chains(code)

Detect raw .then() promise chains.

PARAMETER DESCRIPTION
code

TypeScript source text.

TYPE: str

RETURNS DESCRIPTION
TsPromiseChainFinding

Detection result.

TYPE: TsPromiseChainFinding

Source code in src/mcp_zen_of_languages/rules/tools/detections.py
def detect_ts_promise_chains(code: str) -> TsPromiseChainFinding:
    """Detect raw .then() promise chains.

    Args:
        code (str): TypeScript source text.

    Returns:
        TsPromiseChainFinding: Detection result.
    """
    pattern = re.compile(r"\.then\s*\(")
    return TsPromiseChainFinding(count=len(pattern.findall(code)))

detect_ts_default_exports

detect_ts_default_exports(code)

Detect export default statements.

PARAMETER DESCRIPTION
code

TypeScript source text.

TYPE: str

RETURNS DESCRIPTION
TsDefaultExportFinding

Detection result.

TYPE: TsDefaultExportFinding

Source code in src/mcp_zen_of_languages/rules/tools/detections.py
def detect_ts_default_exports(code: str) -> TsDefaultExportFinding:
    """Detect export default statements.

    Args:
        code (str): TypeScript source text.

    Returns:
        TsDefaultExportFinding: Detection result.
    """
    pattern = re.compile(r"\bexport\s+default\b")
    return TsDefaultExportFinding(count=len(pattern.findall(code)))

detect_ts_catch_all_types

detect_ts_catch_all_types(code)

Detect catch-all type annotations.

PARAMETER DESCRIPTION
code

TypeScript source text.

TYPE: str

RETURNS DESCRIPTION
TsCatchAllTypeFinding

Detection result.

TYPE: TsCatchAllTypeFinding

Source code in src/mcp_zen_of_languages/rules/tools/detections.py
def detect_ts_catch_all_types(code: str) -> TsCatchAllTypeFinding:
    """Detect catch-all type annotations.

    Args:
        code (str): TypeScript source text.

    Returns:
        TsCatchAllTypeFinding: Detection result.
    """
    pattern = re.compile(r":\s*(?:[Oo]bject\b|\{\s*\})")
    return TsCatchAllTypeFinding(count=len(pattern.findall(code)))
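The pattern matches `Object`, `object`, and the empty object type `{}` after a colon, while leaving inline object types with members alone. A quick check on an illustrative signature:

```python
import re

# Same pattern as detect_ts_catch_all_types.
pattern = re.compile(r":\s*(?:[Oo]bject\b|\{\s*\})")

sample = "function f(x: Object, y: object, z: {}): string { return ''; }"
print(len(pattern.findall(sample)))  # 3 -- Object, object, and {} are each flagged

# An inline object type with members is not a catch-all and is ignored.
print(len(pattern.findall("let p: { name: string } = { name: 'x' };")))  # 0
```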

detect_ts_console_usage

detect_ts_console_usage(code)

Detect console.* calls.

PARAMETER DESCRIPTION
code

TypeScript source text.

TYPE: str

RETURNS DESCRIPTION
TsConsoleUsageFinding

Detection result.

TYPE: TsConsoleUsageFinding

Source code in src/mcp_zen_of_languages/rules/tools/detections.py
def detect_ts_console_usage(code: str) -> TsConsoleUsageFinding:
    """Detect console.* calls.

    Args:
        code (str): TypeScript source text.

    Returns:
        TsConsoleUsageFinding: Detection result.
    """
    pattern = re.compile(r"\bconsole\.\w+\s*\(")
    return TsConsoleUsageFinding(count=len(pattern.findall(code)))

detect_ts_require_imports

detect_ts_require_imports(code)

Detect require() calls.

PARAMETER DESCRIPTION
code

TypeScript source text.

TYPE: str

RETURNS DESCRIPTION
TsRequireImportFinding

Detection result.

TYPE: TsRequireImportFinding

Source code in src/mcp_zen_of_languages/rules/tools/detections.py
def detect_ts_require_imports(code: str) -> TsRequireImportFinding:
    """Detect require() calls.

    Args:
        code (str): TypeScript source text.

    Returns:
        TsRequireImportFinding: Detection result.
    """
    pattern = re.compile(r"\brequire\s*\(")
    return TsRequireImportFinding(count=len(pattern.findall(code)))

detect_ts_string_concats

detect_ts_string_concats(code)

Detect string concatenation patterns.

PARAMETER DESCRIPTION
code

TypeScript source text.

TYPE: str

RETURNS DESCRIPTION
TsStringConcatFinding

Detection result.

TYPE: TsStringConcatFinding

Source code in src/mcp_zen_of_languages/rules/tools/detections.py
def detect_ts_string_concats(code: str) -> TsStringConcatFinding:
    """Detect string concatenation patterns.

    Args:
        code (str): TypeScript source text.

    Returns:
        TsStringConcatFinding: Detection result.
    """
    pattern = re.compile(r"""["'][^"']*["']\s*\+\s*\w+""")
    return TsStringConcatFinding(count=len(pattern.findall(code)))

detect_deep_nesting

detect_deep_nesting(code, max_depth=3)

Measure the deepest indentation level using 4-space tab stops.

PARAMETER DESCRIPTION
code

Python source text to scan.

TYPE: str

max_depth

Nesting-depth ceiling used for the boolean flag. Defaults to 3.

TYPE: int DEFAULT: 3

RETURNS DESCRIPTION
tuple[bool, int]

tuple[bool, int]: (exceeds_threshold, deepest_level) — the boolean is True when deepest_level is strictly greater than max_depth.

Source code in src/mcp_zen_of_languages/rules/tools/detections.py
def detect_deep_nesting(code: str, max_depth: int = 3) -> tuple[bool, int]:
    """Measure the deepest indentation level using 4-space tab stops.

    Args:
        code (str): Python source text to scan.
        max_depth (int, optional): Nesting-depth ceiling used for the boolean flag. Defaults to 3.

    Returns:
        tuple[bool, int]: ``(exceeds_threshold, deepest_level)`` — the boolean is ``True``
        when *deepest_level* is strictly greater than *max_depth*.
    """
    max_found = 0
    for line in code.splitlines():
        stripped = line.lstrip("\t ")
        depth = (len(line) - len(stripped)) // 4
        max_found = max(max_found, depth)
    return (max_found > max_depth, max_found)
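Since the helper is pure stdlib, it can be run as-is. The function below is copied from the listing above, applied to an illustrative snippet:

```python
def detect_deep_nesting(code: str, max_depth: int = 3) -> tuple[bool, int]:
    """Measure the deepest indentation level using 4-space tab stops."""
    max_found = 0
    for line in code.splitlines():
        stripped = line.lstrip("\t ")
        depth = (len(line) - len(stripped)) // 4
        max_found = max(max_found, depth)
    return (max_found > max_depth, max_found)

nested = (
    "def f(xs):\n"
    "    for x in xs:\n"
    "        if x:\n"
    "            for y in x:\n"
    "                if y:\n"
    "                    print(y)\n"
)
print(detect_deep_nesting(nested))  # (True, 5) -- 20 spaces of indent = level 5
```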

detect_long_functions

detect_long_functions(code, max_lines=50)

Find top-level def blocks that exceed max_lines.

Uses naive line counting: a new def at column 0 starts a new block and terminates the previous one.

PARAMETER DESCRIPTION
code

Python source text.

TYPE: str

max_lines

Line-count ceiling. Blocks at or below this are ignored. Defaults to 50.

TYPE: int DEFAULT: 50

RETURNS DESCRIPTION
list[tuple[str, int]]

list[tuple[str, int]]: [(function_header, line_count), …] for each over-long function.

Source code in src/mcp_zen_of_languages/rules/tools/detections.py
def detect_long_functions(code: str, max_lines: int = 50) -> list[tuple[str, int]]:
    """Find top-level ``def`` blocks that exceed *max_lines*.

    Uses naive line counting: a new ``def`` at column 0 starts a new block
    and terminates the previous one.

    Args:
        code (str): Python source text.
        max_lines (int, optional): Line-count ceiling.  Blocks at or below this are ignored. Defaults to 50.

    Returns:
        list[tuple[str, int]]: ``[(function_header, line_count), …]`` for each over-long function.
    """
    funcs: list[tuple[str, int]] = []
    current_name = None
    current_count = 0
    for line in code.splitlines():
        if re.match(r"^def\s+\w+", line):
            if current_name and current_count > max_lines:
                funcs.append((current_name, current_count))
            current_name = line.strip()
            current_count = 1
        elif current_name:
            current_count += 1
    if current_name and current_count > max_lines:
        funcs.append((current_name, current_count))
    return funcs
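A lowered ceiling makes the behaviour easy to see. The function below is copied from the listing above and run on a two-function sample (the function names are illustrative):

```python
import re

def detect_long_functions(code: str, max_lines: int = 50) -> list[tuple[str, int]]:
    """Find top-level def blocks longer than max_lines (naive line counting)."""
    funcs: list[tuple[str, int]] = []
    current_name = None
    current_count = 0
    for line in code.splitlines():
        if re.match(r"^def\s+\w+", line):
            if current_name and current_count > max_lines:
                funcs.append((current_name, current_count))
            current_name = line.strip()
            current_count = 1
        elif current_name:
            current_count += 1
    if current_name and current_count > max_lines:
        funcs.append((current_name, current_count))
    return funcs

# A 2-line ceiling: the 2-line function passes, the 4-line one is flagged.
code = "def short():\n    pass\ndef long_one():\n    a = 1\n    b = 2\n    c = 3\n"
print(detect_long_functions(code, max_lines=2))  # [('def long_one():', 4)]
```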

detect_magic_methods_overuse

detect_magic_methods_overuse(code)

Collect all dunder-method definitions (def __xxx__) in code.

PARAMETER DESCRIPTION
code

Python source text to scan.

TYPE: str

RETURNS DESCRIPTION
list[str]

list[str]: Raw matched lines (including leading whitespace) for each dunder def.

Source code in src/mcp_zen_of_languages/rules/tools/detections.py
def detect_magic_methods_overuse(code: str) -> list[str]:
    """Collect all dunder-method definitions (``def __xxx__``) in *code*.

    Args:
        code (str): Python source text to scan.

    Returns:
        list[str]: Raw matched lines (including leading whitespace) for each dunder ``def``.
    """
    return re.findall(r"^\s*def\s+__\w+__", code, flags=re.MULTILINE)
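Note that the returned strings are the raw regex matches, so leading whitespace survives. A short demonstration with the one-liner copied from above (the class is illustrative):

```python
import re

def detect_magic_methods_overuse(code: str) -> list[str]:
    """Collect all dunder-method definition lines."""
    return re.findall(r"^\s*def\s+__\w+__", code, flags=re.MULTILINE)

code = (
    "class Point:\n"
    "    def __init__(self):\n"
    "        pass\n"
    "    def __repr__(self):\n"
    "        return 'Point'\n"
    "    def translate(self):\n"
    "        pass\n"
)
print(detect_magic_methods_overuse(code))
# ['    def __init__', '    def __repr__'] -- translate() is not a dunder
```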

detect_multiple_implementations

detect_multiple_implementations(files)

Identify function names defined in more than one file.

Scans each file for top-level def statements and groups them by function name. Any name appearing in two or more files produces a DuplicateFinding.

PARAMETER DESCRIPTION
files

{filename: source_code} mapping.

TYPE: dict[str, str]

RETURNS DESCRIPTION
list[DuplicateFinding]

list[DuplicateFinding]: One DuplicateFinding per duplicated function name.

Source code in src/mcp_zen_of_languages/rules/tools/detections.py
def detect_multiple_implementations(files: dict[str, str]) -> list[DuplicateFinding]:
    """Identify function names defined in more than one file.

    Scans each file for top-level ``def`` statements and groups them by
    function name.  Any name appearing in two or more files produces a
    ``DuplicateFinding``.

    Args:
        files (dict[str, str]): ``{filename: source_code}`` mapping.

    Returns:
        list[DuplicateFinding]: One ``DuplicateFinding`` per duplicated function name.
    """
    name_map: dict[str, list[str]] = {}
    for fname, code in files.items():
        for m in re.finditer(r"^def\s+(\w+)", code, flags=re.MULTILINE):
            name = m.group(1)
            name_map.setdefault(name, []).append(fname)
    duplicates: list[DuplicateFinding] = [
        DuplicateFinding(name=name, files=fns)
        for name, fns in name_map.items()
        if len(fns) > 1
    ]
    return duplicates
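The grouping logic is easy to exercise in isolation. The sketch below mirrors it, substituting plain (name, files) tuples for the DuplicateFinding model so it stays dependency-free (filenames are illustrative):

```python
import re

def find_duplicate_defs(files: dict[str, str]) -> list[tuple[str, list[str]]]:
    """Group top-level def names by file; report names seen in 2+ files."""
    name_map: dict[str, list[str]] = {}
    for fname, code in files.items():
        for m in re.finditer(r"^def\s+(\w+)", code, flags=re.MULTILINE):
            name_map.setdefault(m.group(1), []).append(fname)
    return [(name, fns) for name, fns in name_map.items() if len(fns) > 1]

files = {
    "utils.py": "def load():\n    pass\ndef save():\n    pass\n",
    "io.py": "def load():\n    pass\n",
}
print(find_duplicate_defs(files))  # [('load', ['utils.py', 'io.py'])]
```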

detect_god_classes

detect_god_classes(code, max_methods=10, max_lines=500)

Flag classes that exceed method-count or line-span thresholds.

Parses class boundaries by looking for class Foo at column 0 and counting indented def lines inside each block.

PARAMETER DESCRIPTION
code

Python source text.

TYPE: str

max_methods

Method-count ceiling. Defaults to 10.

TYPE: int DEFAULT: 10

max_lines

Line-span ceiling. Defaults to 500.

TYPE: int DEFAULT: 500

RETURNS DESCRIPTION
list[GodClassFinding]

list[GodClassFinding]: One GodClassFinding per class breaching either threshold.

Source code in src/mcp_zen_of_languages/rules/tools/detections.py
def detect_god_classes(
    code: str,
    max_methods: int = 10,
    max_lines: int = 500,
) -> list[GodClassFinding]:
    """Flag classes that exceed method-count or line-span thresholds.

    Parses class boundaries by looking for ``class Foo`` at column 0 and
    counting indented ``def`` lines inside each block.

    Args:
        code (str): Python source text.
        max_methods (int, optional): Method-count ceiling. Defaults to 10.
        max_lines (int, optional): Line-span ceiling. Defaults to 500.

    Returns:
        list[GodClassFinding]: One ``GodClassFinding`` per class breaching either threshold.
    """
    results: list[GodClassFinding] = []
    lines = code.splitlines()
    current_class = None
    class_start = 0
    method_count = 0
    for i, ln in enumerate(lines, start=1):
        if m := re.match(r"^class\s+(\w+)", ln):
            # close previous
            if current_class:
                length = i - class_start
                if method_count > max_methods or length > max_lines:
                    results.append(
                        GodClassFinding(
                            name=current_class,
                            method_count=method_count,
                            lines=length,
                        ),
                    )
            current_class = m[1]
            class_start = i
            method_count = 0
        elif current_class and re.match(r"^\s+def\s+\w+", ln):
            method_count += 1

    if current_class:
        length = len(lines) - class_start + 1
        if method_count > max_methods or length > max_lines:
            results.append(
                GodClassFinding(
                    name=current_class,
                    method_count=method_count,
                    lines=length,
                ),
            )
    return results

detect_deep_inheritance

detect_deep_inheritance(code_map, max_depth=3)

Discover inheritance chains deeper than max_depth across multiple files.

Builds a parent map by regex-parsing class Foo(Bar, Baz) declarations, then walks chains recursively. Cycles are detected and short-circuited.

PARAMETER DESCRIPTION
code_map

{filepath: source_code} mapping for all files to consider.

TYPE: dict[str, str]

max_depth

Maximum allowed inheritance hops. Defaults to 3.

TYPE: int DEFAULT: 3

RETURNS DESCRIPTION
list[InheritanceFinding]

list[InheritanceFinding]: One InheritanceFinding per chain exceeding max_depth.

Source code in src/mcp_zen_of_languages/rules/tools/detections.py
def detect_deep_inheritance(
    code_map: dict[str, str],
    max_depth: int = 3,
) -> list[InheritanceFinding]:
    """Discover inheritance chains deeper than *max_depth* across multiple files.

    Builds a parent map by regex-parsing ``class Foo(Bar, Baz)`` declarations,
    then walks chains recursively.  Cycles are detected and short-circuited.

    Args:
        code_map (dict[str, str]): ``{filepath: source_code}`` mapping for all
            files to consider.
        max_depth (int, optional): Maximum allowed inheritance hops. Defaults to 3.

    Returns:
        list[InheritanceFinding]: One ``InheritanceFinding`` per chain
        exceeding *max_depth*.
    """
    parent_map: dict[str, list[str]] = {}
    for code in code_map.values():
        for m in re.finditer(r"^class\s+(\w+)\(([^\)]+)\)", code, flags=re.MULTILINE):
            cls = m.group(1)
            parents = [p.strip().split()[0] for p in m.group(2).split(",") if p.strip()]
            parent_map[cls] = parents

    results: list[InheritanceFinding] = []

    def walk_chain(start: str, seen: set[str]) -> list[list[str] | str]:
        """Recursively trace the inheritance path from *start* to its ancestors."""
        if start in seen:
            return [start]
        seen = seen | {start}
        parents = parent_map.get(start, [])
        if not parents:
            return [start]
        chains = []
        for p in parents:
            chains.extend(
                [start] + (tail if isinstance(tail, list) else [tail])
                for tail in walk_chain(p, seen)
            )
        return chains

    for cls in parent_map:
        chains = walk_chain(cls, set())
        results.extend(
            InheritanceFinding(chain=ch, depth=len(ch) - 1)
            for ch in chains
            if isinstance(ch, list) and len(ch) - 1 > max_depth
        )
    return results

detect_dependency_cycles

detect_dependency_cycles(edges)

Find circular dependencies in a directed edge list via depth-first search.

PARAMETER DESCRIPTION
edges

[(source, target), …] import-dependency pairs.

TYPE: list[tuple[str, str]]

RETURNS DESCRIPTION
list[DependencyCycleFinding]

list[DependencyCycleFinding]: One DependencyCycleFinding per distinct cycle discovered.

Source code in src/mcp_zen_of_languages/rules/tools/detections.py
def detect_dependency_cycles(
    edges: list[tuple[str, str]],
) -> list[DependencyCycleFinding]:
    """Find circular dependencies in a directed edge list via depth-first search.

    Args:
        edges (list[tuple[str, str]]): ``[(source, target), …]`` import-dependency pairs.

    Returns:
        list[DependencyCycleFinding]: One ``DependencyCycleFinding`` per distinct cycle discovered.
    """
    adj: dict[str, list[str]] = {}
    for a, b in edges:
        adj.setdefault(a, []).append(b)
    results: list[DependencyCycleFinding] = []

    def dfs(node: str, path: list[str], seen: set[str]) -> None:
        """Recurse through the adjacency list, recording cycles when a node is revisited."""
        if node in path:
            idx = path.index(node)
            cycle = [*path[idx:], node]
            results.append(DependencyCycleFinding(cycle=cycle))
            return
        if node in seen:
            return
        seen.add(node)
        path.append(node)
        for nb in adj.get(node, []):
            dfs(nb, path, seen)
        path.pop()

    for n in adj:
        dfs(n, [], set())
    return results
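Because the search restarts with a fresh seen set per node, a single cycle surfaces once per member, as its own rotation. The sketch below mirrors the DFS, emitting plain lists in place of DependencyCycleFinding models (module names are illustrative):

```python
def find_cycles(edges: list[tuple[str, str]]) -> list[list[str]]:
    """DFS over a directed edge list, recording each revisited path as a cycle."""
    adj: dict[str, list[str]] = {}
    for a, b in edges:
        adj.setdefault(a, []).append(b)
    results: list[list[str]] = []

    def dfs(node: str, path: list[str], seen: set[str]) -> None:
        if node in path:
            results.append([*path[path.index(node):], node])
            return
        if node in seen:
            return
        seen.add(node)
        path.append(node)
        for nb in adj.get(node, []):
            dfs(nb, path, seen)
        path.pop()

    for n in adj:
        dfs(n, [], set())
    return results

cycles = find_cycles([("a", "b"), ("b", "c"), ("c", "a")])
print(cycles[0])    # ['a', 'b', 'c', 'a']
print(len(cycles))  # 3 -- each start node reports its own rotation of the cycle
```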

detect_feature_envy

detect_feature_envy(code)

Heuristic: flag files where an external object's attributes are accessed more often than self.

Counts self.attr vs other.attr patterns globally across the file. When any single external name outweighs self references, a finding is emitted.

Note

This is a deliberately naive global heuristic — it counts all self.attr occurrences against all other.attr occurrences without scope isolation. Method-level granularity may be added in a future iteration.

PARAMETER DESCRIPTION
code

Python source text.

TYPE: str

RETURNS DESCRIPTION
list[FeatureEnvyFinding]

list[FeatureEnvyFinding]: At most one FeatureEnvyFinding per file.

Source code in src/mcp_zen_of_languages/rules/tools/detections.py
def detect_feature_envy(code: str) -> list[FeatureEnvyFinding]:
    """Heuristic: flag files where an external object's attributes are accessed more often than ``self``.

    Counts ``self.attr`` vs ``other.attr`` patterns globally across the file.
    When any single external name outweighs ``self`` references, a finding
    is emitted.

    Note:
        This is a deliberately naive global heuristic — it counts all
        ``self.attr`` occurrences against all ``other.attr`` occurrences
        without scope isolation.  Method-level granularity may be added
        in a future iteration.

    Args:
        code (str): Python source text.

    Returns:
        list[FeatureEnvyFinding]: At most one ``FeatureEnvyFinding`` per
        file.
    """
    results: list[FeatureEnvyFinding] = []
    self_refs = len(re.findall(r"self\.[a-zA-Z_]+", code))
    others = re.findall(r"(\w+)\.[a-zA-Z_]+", code)
    other_counts: dict[str, int] = {}
    for o in others:
        if o == "self":
            continue
        other_counts[o] = other_counts.get(o, 0) + 1
    if other_counts:
        top = max(other_counts.items(), key=lambda kv: kv[1])
        if top[1] > self_refs:
            results.append(
                FeatureEnvyFinding(
                    method="<unknown>",
                    target_class=top[0],
                    occurrences=top[1],
                ),
            )
    return results

detect_sparse_code

detect_sparse_code(code, max_statements_per_line=1)

Flag lines containing multiple semicolon-separated statements.

Comment-only lines are skipped.

PARAMETER DESCRIPTION
code

Python source text.

TYPE: str

max_statements_per_line

Maximum allowed statements per line. Defaults to 1.

TYPE: int DEFAULT: 1

RETURNS DESCRIPTION
list[SparseCodeFinding]

list[SparseCodeFinding]: One SparseCodeFinding per offending line.

Source code in src/mcp_zen_of_languages/rules/tools/detections.py
def detect_sparse_code(
    code: str,
    max_statements_per_line: int = 1,
) -> list[SparseCodeFinding]:
    """Flag lines containing multiple semicolon-separated statements.

    Comment-only lines are skipped.

    Args:
        code (str): Python source text.
        max_statements_per_line (int, optional): Maximum allowed statements per line. Defaults to 1.

    Returns:
        list[SparseCodeFinding]: One ``SparseCodeFinding`` per offending line.
    """
    results: list[SparseCodeFinding] = []
    for i, line in enumerate(code.splitlines(), start=1):
        if line.strip().startswith("#"):
            continue
        statements = [seg for seg in line.split(";") if seg.strip()]
        if len(statements) > max_statements_per_line:
            results.append(SparseCodeFinding(line=i, statements=len(statements)))
    return results
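The splitting logic is straightforward to verify in isolation. A minimal sketch follows, with plain `(line, statements)` tuples standing in for `SparseCodeFinding` and the `dense_lines` name invented for illustration:

```python
def dense_lines(code: str, max_statements: int = 1) -> list[tuple[int, int]]:
    """Return (line_number, statement_count) for lines packing several statements."""
    findings = []
    for i, line in enumerate(code.splitlines(), start=1):
        if line.strip().startswith("#"):
            continue  # comment-only lines are skipped
        statements = [seg for seg in line.split(";") if seg.strip()]
        if len(statements) > max_statements:
            findings.append((i, len(statements)))
    return findings


sample = "x = 1\n# a; commented; line\na = 1; b = 2; c = 3\n"
findings = dense_lines(sample)
# Only line 3 is flagged: the comment on line 2 is skipped even though
# it contains semicolons.
```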

detect_inconsistent_naming_styles

detect_inconsistent_naming_styles(code)

Check whether function names mix snake_case, camelCase, or PascalCase.

PARAMETER DESCRIPTION
code

Python source text.

TYPE: str

RETURNS DESCRIPTION
list[ConsistencyFinding]

list[ConsistencyFinding]: A single-element list when more than one style is detected, otherwise an empty list.

Source code in src/mcp_zen_of_languages/rules/tools/detections.py
def detect_inconsistent_naming_styles(code: str) -> list[ConsistencyFinding]:
    """Check whether function names mix ``snake_case``, ``camelCase``, or ``PascalCase``.

    Args:
        code (str): Python source text.

    Returns:
        list[ConsistencyFinding]: A single-element list when more than one style is detected,
        otherwise an empty list.
    """
    styles: set[str] = set()
    for m in re.finditer(r"^def\s+([A-Za-z_][A-Za-z0-9_]*)", code, flags=re.MULTILINE):
        name = m.group(1)
        if re.match(r"^_?[a-z][a-z0-9_]*$", name):
            styles.add("snake_case")
        elif re.match(r"^[a-z][A-Za-z0-9]*$", name):
            styles.add("camelCase")
        elif re.match(r"^[A-Z][A-Za-z0-9]*$", name):
            styles.add("PascalCase")
        else:
            styles.add("other")
    if len(styles) > 1:
        return [ConsistencyFinding(naming_styles=sorted(styles))]
    return []
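The three style regexes are tried in order, which matters: an all-lowercase name such as `parse` satisfies both the snake_case and camelCase patterns but is classified as snake_case because that branch is checked first. A small standalone sketch (the `classify` helper is invented for illustration):

```python
import re


def classify(name: str) -> str:
    """Classify a function name using the same ordered regex checks."""
    if re.match(r"^_?[a-z][a-z0-9_]*$", name):
        return "snake_case"  # also catches single-word lowercase names
    if re.match(r"^[a-z][A-Za-z0-9]*$", name):
        return "camelCase"
    if re.match(r"^[A-Z][A-Za-z0-9]*$", name):
        return "PascalCase"
    return "other"
```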

detect_missing_type_hints

detect_missing_type_hints(code)

Walk the AST to find functions with unannotated parameters (excluding self).

PARAMETER DESCRIPTION
code

Python source text.

TYPE: str

RETURNS DESCRIPTION
list[ExplicitnessFinding]

list[ExplicitnessFinding]: One ExplicitnessFinding per function with at least one missing annotation.

Source code in src/mcp_zen_of_languages/rules/tools/detections.py
def detect_missing_type_hints(code: str) -> list[ExplicitnessFinding]:
    """Walk the AST to find functions with unannotated parameters (excluding ``self``).

    Args:
        code (str): Python source text.

    Returns:
        list[ExplicitnessFinding]: One ``ExplicitnessFinding`` per function with at least one missing annotation.
    """
    results: list[ExplicitnessFinding] = []
    try:
        tree = ast.parse(code)
    except SyntaxError:
        return results
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            missing: list[str] = []
            for arg in node.args.args:
                if arg.arg == "self":
                    continue
                if arg.annotation is None:
                    missing.append(arg.arg)
            if missing:
                results.append(
                    ExplicitnessFinding(function=node.name, missing_params=missing),
                )
    return results
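The AST walk can be reproduced compactly outside the package. A sketch with a dict standing in for `ExplicitnessFinding` (the `unannotated_params` name is invented for illustration):

```python
import ast


def unannotated_params(code: str) -> dict[str, list[str]]:
    """Map each function name to its parameters lacking annotations (self excluded)."""
    out: dict[str, list[str]] = {}
    try:
        tree = ast.parse(code)
    except SyntaxError:
        return out  # unparsable source yields no findings
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            missing = [
                a.arg
                for a in node.args.args
                if a.arg != "self" and a.annotation is None
            ]
            if missing:
                out[node.name] = missing
    return out


sample = "def add(a, b: int) -> int:\n    return a + b\n"
found = unannotated_params(sample)
# Only 'a' is reported; 'b' carries an annotation.
```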

detect_namespace_usage

detect_namespace_usage(code)

Count top-level symbols and __all__ entries to gauge namespace pollution.

PARAMETER DESCRIPTION
code

Python source text.

TYPE: str

RETURNS DESCRIPTION
NamespaceFinding

A NamespaceFinding with symbol count and optional export count.

TYPE: NamespaceFinding

Source code in src/mcp_zen_of_languages/rules/tools/detections.py
def detect_namespace_usage(code: str) -> NamespaceFinding:
    """Count top-level symbols and ``__all__`` entries to gauge namespace pollution.

    Args:
        code (str): Python source text.

    Returns:
        NamespaceFinding: A ``NamespaceFinding`` with symbol count and optional export count.
    """
    top_level_symbols = 0
    export_count: int | None = None
    try:
        tree = ast.parse(code)
    except SyntaxError:
        return NamespaceFinding(top_level_symbols=0, export_count=None)
    for node in tree.body:
        if isinstance(
            node,
            (
                ast.FunctionDef,
                ast.ClassDef,
                ast.Assign,
                ast.AnnAssign,
                ast.Import,
                ast.ImportFrom,
            ),
        ):
            top_level_symbols += 1
        if isinstance(node, ast.Assign):
            for target in node.targets:
                if isinstance(target, ast.Name) and target.id == "__all__":
                    if isinstance(node.value, (ast.List, ast.Tuple)):
                        export_count = len(node.value.elts)
                    else:
                        export_count = None
    return NamespaceFinding(
        top_level_symbols=top_level_symbols,
        export_count=export_count,
    )

detect_ts_any_usage

detect_ts_any_usage(code)

Count occurrences of the any type keyword in TypeScript source.

PARAMETER DESCRIPTION
code

TypeScript source text.

TYPE: str

RETURNS DESCRIPTION
TsAnyFinding

A TsAnyFinding with the match count.

TYPE: TsAnyFinding

Source code in src/mcp_zen_of_languages/rules/tools/detections.py
def detect_ts_any_usage(code: str) -> TsAnyFinding:
    """Count occurrences of the ``any`` type keyword in TypeScript source.

    Args:
        code (str): TypeScript source text.

    Returns:
        TsAnyFinding: A ``TsAnyFinding`` with the match count.
    """
    count = len(re.findall(r"\bany\b", code))
    return TsAnyFinding(count=count)

detect_ts_object_type_aliases

detect_ts_object_type_aliases(code)

Count type X = { … } object-literal type aliases in TypeScript.

PARAMETER DESCRIPTION
code

TypeScript source text.

TYPE: str

RETURNS DESCRIPTION
TsTypeAliasFinding

A TsTypeAliasFinding with the match count.

TYPE: TsTypeAliasFinding

Source code in src/mcp_zen_of_languages/rules/tools/detections.py
def detect_ts_object_type_aliases(code: str) -> TsTypeAliasFinding:
    """Count ``type X = { … }`` object-literal type aliases in TypeScript.

    Args:
        code (str): TypeScript source text.

    Returns:
        TsTypeAliasFinding: A ``TsTypeAliasFinding`` with the match count.
    """
    count = len(re.findall(r"\btype\s+\w+\s*=\s*\{", code))
    return TsTypeAliasFinding(count=count)

detect_ts_missing_return_types

detect_ts_missing_return_types(code)

Count exported functions lacking an explicit return-type annotation.

PARAMETER DESCRIPTION
code

TypeScript source text.

TYPE: str

RETURNS DESCRIPTION
TsReturnTypeFinding

A TsReturnTypeFinding with the match count.

TYPE: TsReturnTypeFinding

Source code in src/mcp_zen_of_languages/rules/tools/detections.py
def detect_ts_missing_return_types(code: str) -> TsReturnTypeFinding:
    """Count exported functions lacking an explicit return-type annotation.

    Args:
        code (str): TypeScript source text.

    Returns:
        TsReturnTypeFinding: A ``TsReturnTypeFinding`` with the match count.
    """
    count = len(re.findall(r"export\s+function\s+\w+\s*\([^)]*\)\s*\{", code))
    return TsReturnTypeFinding(count=count)

detect_ts_readonly_usage

detect_ts_readonly_usage(code)

Count readonly modifier occurrences in TypeScript source.

PARAMETER DESCRIPTION
code

TypeScript source text.

TYPE: str

RETURNS DESCRIPTION
TsReadonlyFinding

A TsReadonlyFinding with the match count.

TYPE: TsReadonlyFinding

Source code in src/mcp_zen_of_languages/rules/tools/detections.py
def detect_ts_readonly_usage(code: str) -> TsReadonlyFinding:
    """Count ``readonly`` modifier occurrences in TypeScript source.

    Args:
        code (str): TypeScript source text.

    Returns:
        TsReadonlyFinding: A ``TsReadonlyFinding`` with the match count.
    """
    count = len(re.findall(r"\breadonly\b", code))
    return TsReadonlyFinding(count=count)

detect_ts_type_assertions

detect_ts_type_assertions(code)

Count as T type-assertion expressions in TypeScript source.

PARAMETER DESCRIPTION
code

TypeScript source text.

TYPE: str

RETURNS DESCRIPTION
TsAssertionFinding

A TsAssertionFinding with the match count.

TYPE: TsAssertionFinding

Source code in src/mcp_zen_of_languages/rules/tools/detections.py
def detect_ts_type_assertions(code: str) -> TsAssertionFinding:
    """Count ``as T`` type-assertion expressions in TypeScript source.

    Args:
        code (str): TypeScript source text.

    Returns:
        TsAssertionFinding: A ``TsAssertionFinding`` with the match count.
    """
    count = len(re.findall(r"\bas\s+\w+", code))
    return TsAssertionFinding(count=count)

detect_ts_utility_types

detect_ts_utility_types(code)

Count built-in utility-type references (Partial, Pick, Omit, Record, Readonly).

PARAMETER DESCRIPTION
code

TypeScript source text.

TYPE: str

RETURNS DESCRIPTION
TsUtilityTypeFinding

A TsUtilityTypeFinding with the combined match count.

TYPE: TsUtilityTypeFinding

Source code in src/mcp_zen_of_languages/rules/tools/detections.py
def detect_ts_utility_types(code: str) -> TsUtilityTypeFinding:
    """Count built-in utility-type references (``Partial``, ``Pick``, ``Omit``, ``Record``, ``Readonly``).

    Args:
        code (str): TypeScript source text.

    Returns:
        TsUtilityTypeFinding: A ``TsUtilityTypeFinding`` with the combined match count.
    """
    count = len(re.findall(r"\b(Partial|Pick|Omit|Record|Readonly)\b", code))
    return TsUtilityTypeFinding(count=count)

detect_ts_non_null_assertions

detect_ts_non_null_assertions(code)

Count non-null assertion operators (expr!) in TypeScript source.

PARAMETER DESCRIPTION
code

TypeScript source text.

TYPE: str

RETURNS DESCRIPTION
TsNonNullFinding

A TsNonNullFinding with the match count.

TYPE: TsNonNullFinding

Source code in src/mcp_zen_of_languages/rules/tools/detections.py
def detect_ts_non_null_assertions(code: str) -> TsNonNullFinding:
    """Count non-null assertion operators (``expr!``) in TypeScript source.

    Args:
        code (str): TypeScript source text.

    Returns:
        TsNonNullFinding: A ``TsNonNullFinding`` with the match count.
    """
    count = len(re.findall(r"\b\w+!", code))
    return TsNonNullFinding(count=count)
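The `\b\w+!` pattern is deliberately loose: it also matches the leading operand of a tight inequality such as `a!=b`, so counts should be read as an upper bound. A quick standalone check (the `non_null_assertions` helper is invented for illustration):

```python
import re


def non_null_assertions(code: str) -> int:
    """Count identifier-followed-by-! occurrences, as the detector does."""
    return len(re.findall(r"\b\w+!", code))


real = non_null_assertions("user!.name")          # a genuine non-null assertion
loose = non_null_assertions("if (a!=b) { }")      # false positive: matches 'a!' in 'a!='
```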

detect_ts_plain_enum_objects

detect_ts_plain_enum_objects(code)

Count const X = { patterns that may function as ad-hoc enums.

PARAMETER DESCRIPTION
code

TypeScript source text.

TYPE: str

RETURNS DESCRIPTION
TsEnumObjectFinding

A TsEnumObjectFinding with the match count.

TYPE: TsEnumObjectFinding

Source code in src/mcp_zen_of_languages/rules/tools/detections.py
def detect_ts_plain_enum_objects(code: str) -> TsEnumObjectFinding:
    """Count ``const X = {`` patterns that may function as ad-hoc enums.

    Args:
        code (str): TypeScript source text.

    Returns:
        TsEnumObjectFinding: A ``TsEnumObjectFinding`` with the match count.
    """
    count = len(re.findall(r"\bconst\s+\w+\s*=\s*\{", code))
    return TsEnumObjectFinding(count=count)

detect_ts_unknown_over_any

detect_ts_unknown_over_any(code)

Compare any vs unknown keyword usage in TypeScript source.

A high any_count relative to unknown_count suggests the codebase should migrate toward unknown for safer type narrowing.

PARAMETER DESCRIPTION
code

TypeScript source text.

TYPE: str

RETURNS DESCRIPTION
TsUnknownAnyFinding

A TsUnknownAnyFinding with both counts.

TYPE: TsUnknownAnyFinding

Source code in src/mcp_zen_of_languages/rules/tools/detections.py
def detect_ts_unknown_over_any(code: str) -> TsUnknownAnyFinding:
    """Compare ``any`` vs ``unknown`` keyword usage in TypeScript source.

    A high ``any_count`` relative to ``unknown_count`` suggests the codebase
    should migrate toward ``unknown`` for safer type narrowing.

    Args:
        code (str): TypeScript source text.

    Returns:
        TsUnknownAnyFinding: A ``TsUnknownAnyFinding`` with both counts.
    """
    any_count = len(re.findall(r"\bany\b", code))
    unknown_count = len(re.findall(r"\bunknown\b", code))
    return TsUnknownAnyFinding(any_count=any_count, unknown_count=unknown_count)
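Like the other TypeScript detectors, this one is a plain word-boundary count over the raw source, so occurrences inside comments or strings are counted too. A standalone sketch returning a plain tuple (the `any_unknown_counts` name is invented for illustration):

```python
import re


def any_unknown_counts(code: str) -> tuple[int, int]:
    """Return (any_count, unknown_count) using word-boundary matches."""
    return (
        len(re.findall(r"\bany\b", code)),
        len(re.findall(r"\bunknown\b", code)),
    )


ts = "function parse(x: any): unknown { return x as any; }"
counts = any_unknown_counts(ts)
# 'any' appears twice (annotation and assertion), 'unknown' once.
```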

mcp_zen_of_languages.rules.mapping_export

Export the live rule-to-detector registry as a JSON mapping.

Intended consumers are CI dashboards, coverage reports, and developer tooling that need a static snapshot of which detectors enforce which zen principles. The export format includes per-language rule counts, detector IDs, and a reverse mapping (detector → rules) for cross-referencing.

Functions

build_rule_detector_mapping

build_rule_detector_mapping(languages=None)

Assemble a JSON-serialisable mapping from principles to their detectors.

For each language the output contains:

  • rules_count / detectors_count — aggregate totals.
  • mapping — {rule_id: {principle, detectors, …}}.
  • reverse_mapping — {detector_id: [rule_id, …]}.
PARAMETER DESCRIPTION
languages

Restrict output to these language keys. None includes every language in the registry. Defaults to None.

TYPE: list[str] | None DEFAULT: None

RETURNS DESCRIPTION
dict[str, Any]

dict[str, Any]: Nested dict ready for json.dumps serialisation.

Source code in src/mcp_zen_of_languages/rules/mapping_export.py
def build_rule_detector_mapping(
    languages: list[str] | None = None,
) -> dict[str, Any]:
    """Assemble a JSON-serialisable mapping from principles to their detectors.

    For each language the output contains:

    * ``rules_count`` / ``detectors_count`` — aggregate totals.
    * ``mapping`` — ``{rule_id: {principle, detectors, …}}``.
    * ``reverse_mapping`` — ``{detector_id: [rule_id, …]}``.

    Args:
        languages (list[str] | None, optional): Restrict output to these language keys.  ``None`` includes
            every language in the registry. Defaults to None.

    Returns:
        dict[str, Any]: Nested dict ready for ``json.dumps`` serialisation.
    """
    data: dict[str, Any] = {
        "$schema": "rule_detector_mapping_schema",
        "$comment": "Generated from the live detector registry.",
        "languages": {},
    }

    for language in languages or get_all_languages():
        lang_zen = get_language_zen(language)
        if not lang_zen:
            continue
        rules: dict[str, Any] = {}
        reverse_mapping: dict[str, set[str]] = {}

        for principle in lang_zen.principles:
            metas = REGISTRY.detectors_for_rule(principle.id, language)
            detector_ids = sorted({meta.detector_id for meta in metas})
            rules[principle.id] = {
                "principle": principle.principle,
                "detectors": detector_ids,
                "uncovered_violations": [],
                "coverage": "partial",
            }

        detector_ids = {
            meta.detector_id for meta in REGISTRY.items() if meta.language == language
        }
        for meta in REGISTRY.items():
            if meta.language != language:
                continue
            for rule_id in meta.rule_ids:
                reverse_mapping.setdefault(meta.detector_id, set()).add(rule_id)

        language_payload: dict[str, Any] = {
            "rules_count": len(lang_zen.principles),
            "detectors_count": len(detector_ids),
            "mapping": rules,
        }
        if reverse_mapping:
            language_payload["reverse_mapping"] = {
                detector_id: sorted(rule_ids)
                for detector_id, rule_ids in sorted(reverse_mapping.items())
            }
        data["languages"][language] = language_payload

    return data
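The reverse-mapping construction (accumulate into sets, then sort keys and values for a stable payload) can be sketched with hypothetical registry entries; the detector and rule IDs below are invented, and `(detector_id, rule_ids)` tuples stand in for the registry metadata objects:

```python
# Hypothetical stand-ins for REGISTRY metadata: (detector_id, rule_ids) pairs.
metas = [
    ("bare-except-detector", ["python-explicit-errors"]),
    ("naming-style-detector", ["python-consistency", "python-readability"]),
]

# Invert detector -> rules as the exporter does: accumulate into sets,
# then sort both keys and values so the JSON output is deterministic.
reverse: dict[str, set[str]] = {}
for detector_id, rule_ids in metas:
    for rule_id in rule_ids:
        reverse.setdefault(detector_id, set()).add(rule_id)

reverse_mapping = {
    detector_id: sorted(rule_ids)
    for detector_id, rule_ids in sorted(reverse.items())
}
```

Sorting at serialisation time rather than insertion time keeps the accumulation loop simple while still producing diff-friendly JSON.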

export_mapping_json

export_mapping_json(output_path, languages=None)

Write the rule-to-detector mapping to output_path as pretty-printed JSON.

PARAMETER DESCRIPTION
output_path

Destination file (created or overwritten).

TYPE: str | Path

languages

Restrict to these language keys. None includes all. Defaults to None.

TYPE: list[str] | None DEFAULT: None

RETURNS DESCRIPTION
dict[str, Any]

dict[str, Any]: The same dict that was written to disk, for programmatic reuse.

Source code in src/mcp_zen_of_languages/rules/mapping_export.py
def export_mapping_json(
    output_path: str | Path,
    languages: list[str] | None = None,
) -> dict[str, Any]:
    """Write the rule-to-detector mapping to *output_path* as pretty-printed JSON.

    Args:
        output_path (str | Path): Destination file (created or overwritten).
        languages (list[str] | None, optional): Restrict to these language keys. ``None`` includes all. Defaults to None.

    Returns:
        dict[str, Any]: The same dict that was written to disk, for programmatic reuse.
    """
    data = build_rule_detector_mapping(languages)
    path = Path(output_path)
    path.write_text(json.dumps(data, indent=2), encoding="utf-8")
    return data