
Server Tools

These tools power the MCP server endpoints.

MCP server

mcp_zen_of_languages.server

MCP server exposing zen-of-languages analysis tools over the Model Context Protocol.

This module is the public surface of the zen analysis server. Every function decorated with @mcp.tool becomes an MCP tool that IDE assistants and automation agents can invoke. The module-level CONFIG singleton, loaded once from zen-config.yaml via load_config, governs default thresholds, language lists, and pipeline overrides for the entire session.

Tool registration follows the FastMCP decorator pattern:

@mcp.tool(name="analyze_zen_violations", tags={"analysis", "zen", "snippet"})
async def analyze_zen_violations(code: str, language: str) -> AnalysisResult: ...

The tools are grouped into four families:

  • Analysis — snippet and repository-level violation detection.
  • Reporting — prompt generation, agent task lists, and markdown reports.
  • Configuration — runtime override management and introspection.
  • Onboarding — guided setup for new projects adopting zen analysis.
Note

Runtime overrides set via set_config_override are stored in the module-level _runtime_overrides dict and persist only for the current server session.

Classes

LanguageCoverage

Bases: BaseModel

Per-language counts exposed by zen://languages resource.

LanguagesResource

Bases: BaseModel

Container model for the zen://languages MCP resource.

ConfigOverride

Bases: BaseModel

Session-scoped override for a single language's analysis thresholds.

When an MCP client calls set_config_override, the supplied values are captured in a ConfigOverride instance and stored in the module-level _runtime_overrides dict, keyed by language. Only non-None fields are considered active — omitted fields leave the corresponding zen-config.yaml default in effect.

Note

Overrides do not persist across server restarts. Call clear_config_overrides to reset mid-session.

ConfigStatus

Bases: BaseModel

Read-only snapshot of the server's current configuration state.

Returned by get_config, set_config_override, and clear_config_overrides so callers can confirm the effective settings after every mutation. The overrides_applied field shows only the non-default values injected during the current session.

OnboardingStep

Bases: BaseModel

A single instruction in the guided onboarding sequence.

Each step pairs a human-readable title and description with an action key that MCP clients can use to trigger the corresponding operation programmatically, and an optional example showing concrete invocation syntax.

OnboardingGuide

Bases: BaseModel

Complete onboarding payload returned by onboard_project.

Bundles an ordered list of OnboardingStep entries with a recommended_config dict that reflects the thresholds appropriate for the caller's chosen strictness profile. MCP clients can render the steps as an interactive wizard or apply recommended_config directly to zen-config.yaml.

Functions

main

main()

Start the FastMCP server with stdio transport.

This mirrors mcp_zen_of_languages.__main__.main so packaging can expose a dedicated mcp-zen-of-languages-server console script without adding a separate runtime path.

Source code in src/mcp_zen_of_languages/server.py
def main() -> None:
    """Start the FastMCP server with stdio transport.

    This mirrors ``mcp_zen_of_languages.__main__.main`` so packaging can expose
    a dedicated ``mcp-zen-of-languages-server`` console script without adding a
    separate runtime path.
    """
    mcp.run()

config_resource

config_resource()

Return current configuration status as a read-only MCP resource.

Source code in src/mcp_zen_of_languages/server.py
@mcp.resource(
    "zen://config",
    name="zen_config_resource",
    title="Zen config resource",
    description="Read-only resource exposing current configuration and active overrides.",
    icons=RESOURCE_ICONS,
)
def config_resource() -> "ConfigStatus":
    """Return current configuration status as a read-only MCP resource."""
    return _build_config_status()

rules_resource

rules_resource(language)

Return canonical zen principles for the requested language key.

Source code in src/mcp_zen_of_languages/server.py
@mcp.resource(
    "zen://rules/{language}",
    name="zen_rules_resource",
    title="Zen rules resource",
    description="Read-only resource exposing canonical zen principles for a language.",
    icons=RESOURCE_ICONS,
)
def rules_resource(language: str) -> LanguageZenPrinciples:
    """Return canonical zen principles for the requested language key."""
    zen = get_language_zen(_canonical_language(language))
    if zen is None:
        msg = f"Unsupported language '{language}'."
        raise ValueError(msg)
    return zen

languages_resource

languages_resource()

Return supported languages with principle and detector counts.

Source code in src/mcp_zen_of_languages/server.py
@mcp.resource(
    "zen://languages",
    name="zen_languages_resource",
    title="Zen languages resource",
    description="Read-only resource listing language principle and detector coverage counts.",
    icons=RESOURCE_ICONS,
)
def languages_resource() -> LanguagesResource:
    """Return supported languages with principle and detector counts."""
    from mcp_zen_of_languages.analyzers.registry import REGISTRY

    entries: list[LanguageCoverage] = []
    for language in get_all_languages():
        detectors = [
            meta.detector_id
            for meta in REGISTRY.items()
            if meta.language in [language, "any"]
        ]
        zen = get_language_zen(language)
        entries.append(
            LanguageCoverage(
                language=language,
                principles=len(zen.principles) if zen else 0,
                detectors=len(detectors),
            ),
        )
    return LanguagesResource(languages=entries)

remediation_prompt

remediation_prompt(language, violations)

Build a typed remediation prompt template for MCP clients.

Source code in src/mcp_zen_of_languages/server.py
@mcp.prompt(
    name="zen_remediation_prompt",
    title="Zen remediation prompt",
    description="Generate a remediation prompt scaffold for violations in a language.",
    icons=PROMPT_RESOURCE_ICONS,
    tags={"prompts", "remediation"},
)
def remediation_prompt(language: str, violations: str) -> str:
    """Build a typed remediation prompt template for MCP clients."""
    return (
        "Context: Remediate zen violations in a codebase.\n"
        f"Language: {language}\n"
        "Goal: Produce precise, testable fixes aligned with zen principles.\n"
        f"Violations:\n{violations}\n"
        "Requirements:\n"
        "1. Prioritize highest severity findings first.\n"
        "2. Provide before/after guidance for each fix.\n"
        "3. Include verification steps for each remediation."
    )
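Because the template is pure string formatting, its output can be previewed outside the server. This sketch reproduces the template body without the @mcp.prompt wiring:

```python
def remediation_prompt(language: str, violations: str) -> str:
    # Same template as the decorated prompt above, minus MCP registration.
    return (
        "Context: Remediate zen violations in a codebase.\n"
        f"Language: {language}\n"
        "Goal: Produce precise, testable fixes aligned with zen principles.\n"
        f"Violations:\n{violations}\n"
        "Requirements:\n"
        "1. Prioritize highest severity findings first.\n"
        "2. Provide before/after guidance for each fix.\n"
        "3. Include verification steps for each remediation."
    )


text = remediation_prompt("python", "- long function bodies in utils.py")
print(text.splitlines()[1])  # Language: python
```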

detect_languages async

detect_languages(repo_path)

Return the language identifiers listed in the active zen-config.yaml.

Unlike heuristic language-detection libraries, this tool does not scan file extensions or parse shebangs. It simply reflects the languages key from the configuration that CONFIG loaded at server startup, giving callers a predictable, deterministic list they can iterate over when orchestrating multi-language analysis runs.

PARAMETER DESCRIPTION
repo_path

Workspace root passed by the MCP client — reserved for future per-repo config resolution but currently unused.

TYPE: str

RETURNS DESCRIPTION
LanguagesResult

LanguagesResult wrapping the list of language strings declared in zen-config.yaml (e.g. ["python", "typescript", "go"]).

TYPE: LanguagesResult

Example
result = await detect_languages("/home/dev/myproject")
for lang in result.languages:
    await analyze_zen_violations(code, lang)
See Also

get_supported_languages: Lists languages that have registered detectors, rather than configured languages.

Source code in src/mcp_zen_of_languages/server.py
@mcp.tool(
    name="detect_languages",
    title="Detect languages",
    description="Return supported language list for analysis.",
    tags={"languages", "metadata"},
    annotations=READONLY_ANNOTATIONS,
    output_schema=_output_schema(LanguagesResult),
)
async def detect_languages(repo_path: str) -> LanguagesResult:
    """Return the language identifiers listed in the active ``zen-config.yaml``.

    Unlike heuristic language-detection libraries, this tool does **not**
    scan file extensions or parse shebangs.  It simply reflects the
    ``languages`` key from the configuration that ``CONFIG`` loaded at
    server startup, giving callers a predictable, deterministic list they
    can iterate over when orchestrating multi-language analysis runs.

    Args:
        repo_path (str): Workspace root passed by the MCP client — reserved
            for future per-repo config resolution but currently unused.

    Returns:
        LanguagesResult: LanguagesResult wrapping the list of language strings declared in
        ``zen-config.yaml`` (e.g. ``["python", "typescript", "go"]``).

    Example:
        ```python
        result = await detect_languages("/home/dev/myproject")
        for lang in result.languages:
            await analyze_zen_violations(code, lang)
        ```

    See Also:
        [`get_supported_languages`][mcp_zen_of_languages.server.get_supported_languages]:
            Lists languages that have registered detectors, rather than
            configured languages.

    """
    from mcp_zen_of_languages.models import LanguagesResult

    _ = repo_path
    return LanguagesResult(languages=CONFIG.languages)

analyze_zen_violations async

analyze_zen_violations(
    code,
    language,
    severity_threshold=None,
    perspective=PerspectiveMode.ALL,
    project_as=None,
    *,
    enable_external_tools=False,
    allow_temporary_runners=False,
)

Run v1.0 snippet analysis.

PARAMETER DESCRIPTION
code

Source code to analyse.

TYPE: str

language

Programming language identifier.

TYPE: str

severity_threshold

Minimum severity to include. Defaults to None.

TYPE: int | None DEFAULT: None

perspective

Requested analysis perspective. Defaults to PerspectiveMode.ALL.

TYPE: PerspectiveMode DEFAULT: PerspectiveMode.ALL

project_as

Projection-family target when perspective is projection.

TYPE: str | None DEFAULT: None

enable_external_tools

Opt-in execution of external linters. Defaults to False.

TYPE: bool DEFAULT: False

allow_temporary_runners

Allow temporary tool runners (e.g. npx/uvx). Defaults to False.

TYPE: bool DEFAULT: False

Source code in src/mcp_zen_of_languages/server.py
@mcp.tool(
    name="analyze_zen_violations",
    version=ANALYZE_ZEN_VIOLATIONS_VERSION,
    title="Analyze zen violations",
    description="Analyze a code snippet against zen rules and return analysis results.",
    icons=ANALYSIS_TOOL_ICONS,
    tags={"analysis", "zen", "snippet"},
    annotations=READONLY_ANNOTATIONS,
    output_schema=_output_schema(AnalysisResult),
)
async def analyze_zen_violations(  # noqa: PLR0913
    code: str,
    language: str,
    severity_threshold: int | None = None,
    perspective: PerspectiveMode = PerspectiveMode.ALL,
    project_as: str | None = None,
    *,
    enable_external_tools: bool = False,
    allow_temporary_runners: bool = False,
) -> AnalysisResult:
    """Run v1.0 snippet analysis.

    Args:
        code (str): Source code to analyse.
        language (str): Programming language identifier.
        severity_threshold (int | None, optional): Minimum severity to include. Defaults to None.
        perspective (PerspectiveMode, optional): Requested analysis perspective. Defaults to ``PerspectiveMode.ALL``.
        project_as (str | None, optional): Projection-family target when ``perspective`` is ``projection``.
        enable_external_tools (bool, optional): Opt-in execution of external linters. Defaults to False.
        allow_temporary_runners (bool, optional): Allow temporary tool runners (e.g. npx/uvx). Defaults to False.
    """
    return _analyze_snippet_internal(
        code=code,
        language=language,
        tool_version=ANALYZE_ZEN_VIOLATIONS_VERSION,
        severity_threshold=severity_threshold,
        perspective=perspective,
        project_as=project_as,
        enable_external_tools=enable_external_tools,
        allow_temporary_runners=allow_temporary_runners,
        reject_empty_code=False,
    )

analyze_zen_violations_v2 async

analyze_zen_violations_v2(
    code,
    language,
    severity_threshold=None,
    perspective=PerspectiveMode.ALL,
    project_as=None,
    *,
    enable_external_tools=False,
    allow_temporary_runners=False,
)

Run v2.0 snippet analysis with non-empty code validation.

PARAMETER DESCRIPTION
code

Source code to analyse.

TYPE: str

language

Programming language identifier.

TYPE: str

severity_threshold

Minimum severity to include. Defaults to None.

TYPE: int | None DEFAULT: None

perspective

Requested analysis perspective. Defaults to PerspectiveMode.ALL.

TYPE: PerspectiveMode DEFAULT: PerspectiveMode.ALL

project_as

Projection-family target when perspective is projection.

TYPE: str | None DEFAULT: None

enable_external_tools

Enable external tools. Defaults to False.

TYPE: bool DEFAULT: False

allow_temporary_runners

Allow temporary runners. Defaults to False.

TYPE: bool DEFAULT: False

Source code in src/mcp_zen_of_languages/server.py
@mcp.tool(
    name="analyze_zen_violations",
    version=ANALYZE_ZEN_VIOLATIONS_V2_VERSION,
    title="Analyze zen violations (v2)",
    description=(
        "Analyze a code snippet against zen rules with stricter request-quality "
        "guardrails and richer telemetry metadata."
    ),
    icons=ANALYSIS_TOOL_ICONS,
    tags={"analysis", "zen", "snippet", "v2"},
    annotations=READONLY_ANNOTATIONS,
    output_schema=_output_schema(AnalysisResult),
)
async def analyze_zen_violations_v2(  # noqa: PLR0913
    code: str,
    language: str,
    severity_threshold: int | None = None,
    perspective: PerspectiveMode = PerspectiveMode.ALL,
    project_as: str | None = None,
    *,
    enable_external_tools: bool = False,
    allow_temporary_runners: bool = False,
) -> AnalysisResult:
    """Run v2.0 snippet analysis with non-empty code validation.

    Args:
        code (str): Source code to analyse.
        language (str): Programming language identifier.
        severity_threshold (int | None, optional): Minimum severity to include. Defaults to None.
        perspective (PerspectiveMode, optional): Requested analysis perspective. Defaults to ``PerspectiveMode.ALL``.
        project_as (str | None, optional): Projection-family target when ``perspective`` is ``projection``.
        enable_external_tools (bool, optional): Enable external tools. Defaults to False.
        allow_temporary_runners (bool, optional): Allow temporary runners. Defaults to False.
    """
    return _analyze_snippet_internal(
        code=code,
        language=language,
        tool_version=ANALYZE_ZEN_VIOLATIONS_V2_VERSION,
        severity_threshold=severity_threshold,
        perspective=perspective,
        project_as=project_as,
        enable_external_tools=enable_external_tools,
        allow_temporary_runners=allow_temporary_runners,
        reject_empty_code=True,
    )

generate_prompts_tool async

generate_prompts_tool(
    code,
    language,
    perspective=PerspectiveMode.ALL,
    project_as=None,
    *,
    enable_external_tools=False,
    allow_temporary_runners=False,
)

Generate remediation prompts for v1.0 prompt generation.

PARAMETER DESCRIPTION
code

Source code to analyse.

TYPE: str

language

Programming language identifier.

TYPE: str

perspective

Requested analysis perspective. Defaults to PerspectiveMode.ALL.

TYPE: PerspectiveMode DEFAULT: PerspectiveMode.ALL

project_as

Projection-family target when perspective is projection.

TYPE: str | None DEFAULT: None

enable_external_tools

Enable external tools. Defaults to False.

TYPE: bool DEFAULT: False

allow_temporary_runners

Allow temporary runners. Defaults to False.

TYPE: bool DEFAULT: False

Source code in src/mcp_zen_of_languages/server.py
@mcp.tool(
    name="generate_prompts",
    version=GENERATE_PROMPTS_VERSION,
    title="Generate remediation prompts",
    description="Generate remediation prompts from zen analysis results.",
    icons=PROMPT_TOOL_ICONS,
    tags={"prompts", "remediation"},
    annotations=READONLY_ANNOTATIONS,
    output_schema=_output_schema(PromptBundle),
)
async def generate_prompts_tool(  # noqa: PLR0913
    code: str,
    language: str,
    perspective: PerspectiveMode = PerspectiveMode.ALL,
    project_as: str | None = None,
    *,
    enable_external_tools: bool = False,
    allow_temporary_runners: bool = False,
) -> PromptBundle:
    """Generate remediation prompts for v1.0 prompt generation.

    Args:
        code (str): Source code to analyse.
        language (str): Programming language identifier.
        perspective (PerspectiveMode, optional): Requested analysis perspective. Defaults to ``PerspectiveMode.ALL``.
        project_as (str | None, optional): Projection-family target when ``perspective`` is ``projection``.
        enable_external_tools (bool, optional): Enable external tools. Defaults to False.
        allow_temporary_runners (bool, optional): Allow temporary runners. Defaults to False.
    """
    return _generate_prompts_internal(
        code=code,
        language=language,
        tool_version=GENERATE_PROMPTS_VERSION,
        perspective=perspective,
        project_as=project_as,
        enable_external_tools=enable_external_tools,
        allow_temporary_runners=allow_temporary_runners,
    )

generate_prompts_tool_v2 async

generate_prompts_tool_v2(
    code,
    language,
    perspective=PerspectiveMode.ALL,
    project_as=None,
    *,
    enable_external_tools=False,
    allow_temporary_runners=False,
)

Generate remediation prompts for v2.0 prompt generation.

PARAMETER DESCRIPTION
code

Source code to analyse.

TYPE: str

language

Programming language identifier.

TYPE: str

perspective

Requested analysis perspective. Defaults to PerspectiveMode.ALL.

TYPE: PerspectiveMode DEFAULT: PerspectiveMode.ALL

project_as

Projection-family target when perspective is projection.

TYPE: str | None DEFAULT: None

enable_external_tools

Enable external tools. Defaults to False.

TYPE: bool DEFAULT: False

allow_temporary_runners

Allow temporary runners. Defaults to False.

TYPE: bool DEFAULT: False

Source code in src/mcp_zen_of_languages/server.py
@mcp.tool(
    name="generate_prompts",
    version=GENERATE_PROMPTS_V2_VERSION,
    title="Generate remediation prompts (v2)",
    description=(
        "Generate remediation prompts with MCP-first guidance metadata and v2 "
        "versioned prompt semantics."
    ),
    icons=PROMPT_TOOL_ICONS,
    tags={"prompts", "remediation", "v2"},
    annotations=READONLY_ANNOTATIONS,
    output_schema=_output_schema(PromptBundle),
)
async def generate_prompts_tool_v2(  # noqa: PLR0913
    code: str,
    language: str,
    perspective: PerspectiveMode = PerspectiveMode.ALL,
    project_as: str | None = None,
    *,
    enable_external_tools: bool = False,
    allow_temporary_runners: bool = False,
) -> PromptBundle:
    """Generate remediation prompts for v2.0 prompt generation.

    Args:
        code (str): Source code to analyse.
        language (str): Programming language identifier.
        perspective (PerspectiveMode, optional): Requested analysis perspective. Defaults to ``PerspectiveMode.ALL``.
        project_as (str | None, optional): Projection-family target when ``perspective`` is ``projection``.
        enable_external_tools (bool, optional): Enable external tools. Defaults to False.
        allow_temporary_runners (bool, optional): Allow temporary runners. Defaults to False.
    """
    return _generate_prompts_internal(
        code=code,
        language=language,
        tool_version=GENERATE_PROMPTS_V2_VERSION,
        perspective=perspective,
        project_as=project_as,
        enable_external_tools=enable_external_tools,
        allow_temporary_runners=allow_temporary_runners,
    )

analyze_repository async

analyze_repository(
    repo_path,
    languages=None,
    max_files=100,
    ctx=None,
    *,
    enable_external_tools=False,
    allow_temporary_runners=False,
)

Analyse every eligible file in a repository and return per-file results.

This is the public MCP tool that wraps _analyze_repository_internal. It exists as a thin async façade so that the internal helper can also be called from non-tool code paths (such as generate_agent_tasks_tool) without duplicating parameter validation or the @mcp.tool decorator.

PARAMETER DESCRIPTION
repo_path

Absolute path to the repository root. The MCP client typically resolves this from the active workspace.

TYPE: str

languages

Restrict analysis to specific language identifiers. Defaults to ["python"] internally.

TYPE: list[str] | None DEFAULT: None

max_files

Per-language cap on the number of files to analyse, protecting against excessive runtime on monorepos. Defaults to 100.

TYPE: int DEFAULT: 100

ctx

Optional FastMCP context for progress and log updates during repository analysis. Defaults to None.

TYPE: fastmcp.Context | None DEFAULT: None

enable_external_tools

Opt-in execution of allow-listed external tools while analyzing files. Defaults to False.

TYPE: bool DEFAULT: False

allow_temporary_runners

Permit temporary-runner fallback strategies for external tools. Defaults to False.

TYPE: bool DEFAULT: False

RETURNS DESCRIPTION
list[RepositoryAnalysis]

List of RepositoryAnalysis entries, each pairing a file path and language with its AnalysisResult.

TYPE: list[RepositoryAnalysis]

Example
entries = await analyze_repository(
    "/home/dev/myproject", languages=["python", "go"], max_files=50
)
for entry in entries:
    print(entry.path, entry.result.overall_score)
See Also

generate_agent_tasks_tool: Builds actionable remediation tasks from repository analysis.

Source code in src/mcp_zen_of_languages/server.py
@mcp.tool(
    name="analyze_repository",
    title="Analyze repository",
    description="Analyze a repository path and return per-file analysis results.",
    tags={"analysis", "repository"},
    annotations=READONLY_ANNOTATIONS,
    task=BACKGROUND_TASK,
)
async def analyze_repository(  # noqa: PLR0913
    repo_path: str,
    languages: list[str] | None = None,
    max_files: int = 100,
    ctx: fastmcp.Context | None = None,
    *,
    enable_external_tools: bool = False,
    allow_temporary_runners: bool = False,
) -> list[RepositoryAnalysis]:
    """Analyse every eligible file in a repository and return per-file results.

    This is the public MCP tool that wraps
    ``_analyze_repository_internal``.  It exists as a thin async façade so
    that the internal helper can also be called from non-tool code paths
    (such as ``generate_agent_tasks_tool``) without duplicating parameter
    validation or the ``@mcp.tool`` decorator.

    Args:
        repo_path (str): Absolute path to the repository root.  The MCP
            client typically resolves this from the active workspace.
        languages (list[str] | None, optional): Restrict analysis to specific
            language identifiers.  Defaults to ``["python"]`` internally.
        max_files (int, optional): Per-language cap on the number of files to
            analyse, protecting against excessive runtime on monorepos. Defaults to 100.
        ctx (fastmcp.Context | None, optional): Optional FastMCP context for progress
            and log updates during repository analysis. Defaults to None.
        enable_external_tools (bool, optional): Opt-in execution of allow-listed
            external tools while analyzing files. Defaults to False.
        allow_temporary_runners (bool, optional): Permit temporary-runner fallback
            strategies for external tools. Defaults to False.

    Returns:
        list[RepositoryAnalysis]: List of ``RepositoryAnalysis`` entries, each pairing a file path
        and language with its ``AnalysisResult``.

    Example:
        ```python
        entries = await analyze_repository(
            "/home/dev/myproject", languages=["python", "go"], max_files=50
        )
        for entry in entries:
            print(entry.path, entry.result.overall_score)
        ```

    See Also:
        [`generate_agent_tasks_tool`][mcp_zen_of_languages.server.generate_agent_tasks_tool]:
            Builds actionable remediation tasks from repository analysis.

    """
    return await _analyze_repository_internal(
        repo_path,
        languages,
        max_files,
        ctx,
        enable_external_tools=enable_external_tools,
        allow_temporary_runners=allow_temporary_runners,
    )

analyze_batch async

analyze_batch(
    path,
    language,
    cursor=None,
    max_tokens=8000,
    max_files=100,
    *,
    enable_external_tools=False,
    allow_temporary_runners=False,
)

Analyse a repository and return one token-budgeted page of violations.

This tool is explicitly designed for LLM agent workflows where the full violation list would exceed the model's context window. It analyses the repository, sorts violations globally by severity (highest first), and returns only as many as fit within max_tokens. The opaque cursor field in the response encodes the resume position; pass it back unchanged on the next call to advance to the next page.

Design principles:

  • Stateless — the cursor encodes the exact position in the sorted violation list; no server-side session state is required.
  • Token-budget aware — the response is trimmed so that the serialised payload stays within max_tokens.
  • Priority ordering — highest-severity violations are surfaced first across all pages.
PARAMETER DESCRIPTION
path

Absolute or relative path to the repository root.

TYPE: str

language

Language identifier to restrict analysis (e.g. "python").

TYPE: str

cursor

Opaque continuation token from a previous call. Omit or pass None to start from the first page. Defaults to None.

TYPE: str | None DEFAULT: None

max_tokens

Approximate token budget for the violations payload. Violations are added until the budget would be exceeded; the envelope overhead is excluded from this count. Defaults to 8000.

TYPE: int DEFAULT: 8000

max_files

Cap on the number of files to analyse. Defaults to 100.

TYPE: int DEFAULT: 100

enable_external_tools

Opt-in execution of external linters. Defaults to False.

TYPE: bool DEFAULT: False

allow_temporary_runners

Permit temporary-runner strategies. Defaults to False.

TYPE: bool DEFAULT: False

RETURNS DESCRIPTION
BatchPage

A page carrying the token-budgeted violations, a continuation cursor (None when all violations have been returned), and file count metadata.

TYPE: BatchPage

Example
# First page
page = await analyze_batch("/repo", "python", max_tokens=4000)
while page.has_more:
    page = await analyze_batch("/repo", "python", cursor=page.cursor)
See Also

analyze_batch_summary: Returns a compact health-score overview that always fits in one context window.
analyze_repository: Full (unpaginated) repository analysis for non-LLM consumers.

Source code in src/mcp_zen_of_languages/server.py
@mcp.tool(
    name="analyze_batch",
    title="Analyze repository (batch / LLM-safe)",
    description=(
        "Analyse a repository path and return token-budgeted, paginated violations "
        "designed for LLM context windows. Highest-severity violations appear first. "
        "Pass the returned cursor to resume from the next page."
    ),
    tags={"analysis", "batch", "pagination"},
    annotations=READONLY_ANNOTATIONS,
    output_schema=_output_schema(BatchPage),
    task=BACKGROUND_TASK,
)
async def analyze_batch(  # noqa: PLR0913
    path: str,
    language: str,
    cursor: str | None = None,
    max_tokens: int = 8000,
    max_files: int = 100,
    *,
    enable_external_tools: bool = False,
    allow_temporary_runners: bool = False,
) -> BatchPage:
    """Analyse a repository and return one token-budgeted page of violations.

    This tool is explicitly designed for LLM agent workflows where the
    full violation list would exceed the model's context window.  It
    analyses the repository, sorts violations globally by severity
    (highest first), and returns only as many as fit within *max_tokens*.
    The opaque *cursor* field in the response encodes the resume position;
    pass it back unchanged on the next call to advance to the next page.

    Design principles:

    * **Stateless** — the cursor encodes the exact position in the sorted
      violation list; no server-side session state is required.
    * **Token-budget aware** — the response is trimmed so that the
      serialised payload stays within *max_tokens*.
    * **Priority ordering** — highest-severity violations are surfaced
      first across all pages.

    Args:
        path (str): Absolute or relative path to the repository root.
        language (str): Language identifier to restrict analysis (e.g. ``"python"``).
        cursor (str | None, optional): Opaque continuation token from a previous
            call.  Omit or pass ``None`` to start from the first page. Defaults to None.
        max_tokens (int, optional): Approximate token budget for the ``violations``
            payload.  Violations are added until the budget would be exceeded;
            the envelope overhead is excluded from this count. Defaults to 8000.
        max_files (int, optional): Cap on the number of files to analyse.
            Defaults to 100.
        enable_external_tools (bool, optional): Opt-in execution of external linters.
            Defaults to False.
        allow_temporary_runners (bool, optional): Permit temporary-runner strategies.
            Defaults to False.

    Returns:
        BatchPage: A page carrying the token-budgeted violations, a continuation
        cursor (``None`` when all violations have been returned), and file
        count metadata.

    Example:
        ```python
        # First page
        page = await analyze_batch("/repo", "python", max_tokens=4000)
        while page.has_more:
            page = await analyze_batch("/repo", "python", cursor=page.cursor)
        ```

    See Also:
        [`analyze_batch_summary`][mcp_zen_of_languages.server.analyze_batch_summary]:
            Returns a compact health-score overview that always fits in one
            context window.
        [`analyze_repository`][mcp_zen_of_languages.server.analyze_repository]:
            Full (unpaginated) repository analysis for non-LLM consumers.
    """
    canonical_language = _canonical_language(language)
    results = await _analyze_repository_internal(
        path,
        [canonical_language],
        max_files,
        None,
        enable_external_tools=enable_external_tools,
        allow_temporary_runners=allow_temporary_runners,
    )
    files_total = len(results)

    all_violations = _build_batch_violations_list(results)

    start_index, _ = _decode_cursor(cursor) if cursor else (0, 0)
    start_index = max(0, min(start_index, len(all_violations)))

    page_violations: list[BatchViolation] = []
    used_tokens = 0
    token_budget = max(1, max_tokens - _BATCH_ENVELOPE_TOKEN_OVERHEAD)

    next_index = start_index
    for bv in all_violations[start_index:]:
        serialised = json.dumps(bv.model_dump())
        cost = _estimate_tokens(serialised)
        if used_tokens + cost > token_budget:
            if not page_violations:
                # Single violation is too large for the budget; skip it so the
                # cursor advances and we do not get stuck on the same item.
                next_index += 1
            break
        page_violations.append(bv)
        used_tokens += cost
        next_index += 1

    has_more = next_index < len(all_violations)
    next_cursor = _encode_cursor(next_index) if has_more else None

    # files_in_page: distinct files represented in this page's violations
    files_in_page = len({bv.file for bv in page_violations})
    # Use a fixed logical page size (independent of token budget) so that the
    # page number is stable across calls with different max_tokens values.
    _logical_page_size = 50
    page_num = start_index // _logical_page_size + 1 if all_violations else 1

    return BatchPage(
        cursor=next_cursor,
        page=page_num,
        has_more=has_more,
        violations=page_violations,
        files_in_page=files_in_page,
        files_total=files_total,
    )

analyze_batch_summary async

analyze_batch_summary(
    path,
    language,
    max_files=100,
    *,
    enable_external_tools=False,
    allow_temporary_runners=False,
)

Return a compact health overview for a repository — always one page.

Unlike analyze_batch, which paginates a potentially large violation list, this tool summarises the entire repository in a single response. It is designed to fit comfortably inside any LLM context window so that an agent can assess project health and identify where to focus before deciding whether to call analyze_batch for deeper detail.

The returned health_score is the repository's average overall_score expressed on a 0-100 scale (higher is better). The hotspots list contains the five files with the highest violation count, ordered by descending total violations.
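As a hedged illustration of the scoring described above (the real computation lives in `_compute_health_score`, whose internals are not shown on this page), here is a minimal sketch assuming each file result exposes an `overall_score` in [0, 1] plus a violation count:

```python
# Illustrative sketch only; the field names and the empty-repository
# behaviour are assumptions, not the server's actual implementation.
def health_score(scores: list[float]) -> float:
    """Average per-file overall_score, rescaled to 0-100 (higher is better)."""
    if not scores:
        return 100.0  # assumed: an empty repository has nothing to penalise
    return round(100.0 * sum(scores) / len(scores), 1)


def top_hotspots(violations_per_file: dict[str, int], n: int = 5) -> list[str]:
    """The n files with the most violations, worst first."""
    return sorted(violations_per_file, key=violations_per_file.get, reverse=True)[:n]
```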

PARAMETER DESCRIPTION
path

Absolute or relative path to the repository root.

TYPE: str

language

Language identifier to restrict analysis (e.g. "python").

TYPE: str

max_files

Cap on the number of files to analyse. Defaults to 100.

TYPE: int DEFAULT: 100

enable_external_tools

Opt-in execution of external linters. Defaults to False.

TYPE: bool DEFAULT: False

allow_temporary_runners

Permit temporary-runner strategies. Defaults to False.

TYPE: bool DEFAULT: False

RETURNS DESCRIPTION
BatchSummary

Compact summary with health_score (0-100), up to five hotspots, total_violations, and total_files.

TYPE: BatchSummary

Example

```python
summary = await analyze_batch_summary("/repo", "python")
print(summary.health_score, summary.hotspots)
```
See Also

analyze_batch: Full paginated violation detail for LLM agents.

analyze_repository: Complete unpaginated results for non-LLM consumers.

Source code in src/mcp_zen_of_languages/server.py
@mcp.tool(
    name="analyze_batch_summary",
    title="Analyze repository — batch summary",
    description=(
        "Return a compact project health score and top-5 hotspot files from a "
        "repository scan. Always fits within a single LLM context window. "
        "Use this before analyze_batch to decide whether full pagination is needed."
    ),
    tags={"analysis", "batch", "summary"},
    annotations=READONLY_ANNOTATIONS,
    output_schema=_output_schema(BatchSummary),
    task=BACKGROUND_TASK,
)
async def analyze_batch_summary(
    path: str,
    language: str,
    max_files: int = 100,
    *,
    enable_external_tools: bool = False,
    allow_temporary_runners: bool = False,
) -> BatchSummary:
    """Return a compact health overview for a repository — always one page.

    Unlike ``analyze_batch``, which paginates a potentially large violation
    list, this tool summarises the entire repository in a single response.
    It is designed to fit comfortably inside any LLM context window so that
    an agent can assess project health and identify where to focus before
    deciding whether to call ``analyze_batch`` for deeper detail.

    The returned ``health_score`` is the repository's average ``overall_score``
    expressed on a 0-100 scale (higher is better).  The ``hotspots`` list
    contains the five files with the highest violation count, ordered by
    descending total violations.

    Args:
        path (str): Absolute or relative path to the repository root.
        language (str): Language identifier to restrict analysis (e.g. ``"python"``).
        max_files (int, optional): Cap on the number of files to analyse.
            Defaults to 100.
        enable_external_tools (bool, optional): Opt-in execution of external linters.
            Defaults to False.
        allow_temporary_runners (bool, optional): Permit temporary-runner strategies.
            Defaults to False.

    Returns:
        BatchSummary: Compact summary with ``health_score`` (0-100),
        up to five ``hotspots``, ``total_violations``, and ``total_files``.

    Example:
        ```python
        summary = await analyze_batch_summary("/repo", "python")
        print(summary.health_score, summary.hotspots)
        ```

    See Also:
        [`analyze_batch`][mcp_zen_of_languages.server.analyze_batch]:
            Full paginated violation detail for LLM agents.
        [`analyze_repository`][mcp_zen_of_languages.server.analyze_repository]:
            Complete unpaginated results for non-LLM consumers.
    """
    canonical_language = _canonical_language(language)
    results = await _analyze_repository_internal(
        path,
        [canonical_language],
        max_files,
        None,
        enable_external_tools=enable_external_tools,
        allow_temporary_runners=allow_temporary_runners,
    )

    total_files = len(results)
    total_violations = sum(len(entry.result.violations) for entry in results)

    health_score = _compute_health_score(results)

    # Top-5 hotspots by violation count, tie-broken by top severity
    sorted_results = sorted(
        results,
        key=lambda e: (
            len(e.result.violations),
            max((v.severity for v in e.result.violations), default=0),
        ),
        reverse=True,
    )
    hotspots = [
        BatchHotspot(
            path=entry.path,
            language=entry.language,
            violations=len(entry.result.violations),
            top_severity=max((v.severity for v in entry.result.violations), default=0),
        )
        for entry in sorted_results[:5]
    ]

    return BatchSummary(
        health_score=health_score,
        hotspots=hotspots,
        total_violations=total_violations,
        total_files=total_files,
    )

analyze_batch_auto async

analyze_batch_auto(
    path,
    language,
    cursor=None,
    max_tokens=8000,
    max_files=100,
    *,
    enable_external_tools=False,
    allow_temporary_runners=False,
)

Analyse a repository, routing automatically between full and paginated mode.

This tool eliminates the need to choose between analyze_repository (full, unbounded) and analyze_batch (always paginated). It analyses the repository once, then decides:

  • If all violations fit within max_tokens: returns them all in a single page (has_more=False, cursor=None).
  • If violations exceed max_tokens: returns the first token-budgeted page with a continuation cursor (has_more=True).

Cursor continuation is handled identically to analyze_batch — pass the opaque cursor from the previous response unchanged.
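Cursors are opaque by design. Purely as an illustration of what such a token could look like (the server's actual `_encode_cursor`/`_decode_cursor` format is an internal detail that clients must not rely on), a base64-wrapped JSON index:

```python
import base64
import json


# Hypothetical encoding only; clients should treat the cursor as an opaque
# string and pass it back to the server unchanged.
def encode_cursor(index: int) -> str:
    payload = json.dumps({"i": index}).encode()
    return base64.urlsafe_b64encode(payload).decode()


def decode_cursor(cursor: str) -> int:
    payload = json.loads(base64.urlsafe_b64decode(cursor.encode()))
    return payload["i"]
```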

PARAMETER DESCRIPTION
path

Absolute or relative path to the repository root.

TYPE: str

language

Language identifier to restrict analysis (e.g. "python").

TYPE: str

cursor

Opaque continuation token from a previous call. Omit or pass None to start from the first page. Defaults to None.

TYPE: str | None DEFAULT: None

max_tokens

Approximate token budget for the violations payload. When all violations fit, they are returned in full; otherwise the first page is returned. Defaults to 8000.

TYPE: int DEFAULT: 8000

max_files

Cap on the number of files to analyse. Defaults to 100.

TYPE: int DEFAULT: 100

enable_external_tools

Opt-in execution of external linters. Defaults to False.

TYPE: bool DEFAULT: False

allow_temporary_runners

Permit temporary-runner strategies. Defaults to False.

TYPE: bool DEFAULT: False

RETURNS DESCRIPTION
BatchPage

A page with has_more=False when all violations were returned, or a page with a continuation cursor when more pages remain.

TYPE: BatchPage

Example

```python
# Single call for small repos; LLM paginates only when needed
page = await analyze_batch_auto("/repo", "python")
while page.has_more:
    page = await analyze_batch_auto("/repo", "python", cursor=page.cursor)
```
See Also

analyze_batch: Always-paginated variant with explicit cursor management.

analyze_batch_summary: Compact overview (health score + hotspots) to decide if pagination is needed.

Source code in src/mcp_zen_of_languages/server.py
@mcp.tool(
    name="analyze_batch_auto",
    title="Analyze repository (auto-routing)",
    description=(
        "Smart entry point for LLM agents: automatically decides between returning "
        "all violations at once (small repos) or paginating (large repos). "
        "Pass the returned cursor back to continue pagination if has_more is true. "
        "Prefer this over manually choosing between analyze_repository and analyze_batch."
    ),
    tags={"analysis", "batch", "pagination", "auto"},
    annotations=READONLY_ANNOTATIONS,
    output_schema=_output_schema(BatchPage),
    task=BACKGROUND_TASK,
)
async def analyze_batch_auto(  # noqa: PLR0913
    path: str,
    language: str,
    cursor: str | None = None,
    max_tokens: int = 8000,
    max_files: int = 100,
    *,
    enable_external_tools: bool = False,
    allow_temporary_runners: bool = False,
) -> BatchPage:
    """Analyse a repository, routing automatically between full and paginated mode.

    This tool eliminates the need to choose between ``analyze_repository``
    (full, unbounded) and ``analyze_batch`` (always paginated).  It analyses
    the repository once, then decides:

    * If all violations fit within *max_tokens*: returns them all in a single
      page (``has_more=False``, ``cursor=None``).
    * If violations exceed *max_tokens*: returns the first token-budgeted page
      with a continuation cursor (``has_more=True``).

    Cursor continuation is handled identically to ``analyze_batch`` — pass the
    opaque cursor from the previous response unchanged.

    Args:
        path (str): Absolute or relative path to the repository root.
        language (str): Language identifier to restrict analysis (e.g. ``"python"``).
        cursor (str | None, optional): Opaque continuation token from a previous
            call.  Omit or pass ``None`` to start from the first page. Defaults to None.
        max_tokens (int, optional): Approximate token budget for the ``violations``
            payload.  When all violations fit, they are returned in full; otherwise
            the first page is returned. Defaults to 8000.
        max_files (int, optional): Cap on the number of files to analyse.
            Defaults to 100.
        enable_external_tools (bool, optional): Opt-in execution of external linters.
            Defaults to False.
        allow_temporary_runners (bool, optional): Permit temporary-runner strategies.
            Defaults to False.

    Returns:
        BatchPage: A page with ``has_more=False`` when all violations were returned,
        or a page with a continuation cursor when more pages remain.

    Example:
        ```python
        # Single call for small repos; LLM paginates only when needed
        page = await analyze_batch_auto("/repo", "python")
        while page.has_more:
            page = await analyze_batch_auto("/repo", "python", cursor=page.cursor)
        ```

    See Also:
        [`analyze_batch`][mcp_zen_of_languages.server.analyze_batch]:
            Always-paginated variant with explicit cursor management.
        [`analyze_batch_summary`][mcp_zen_of_languages.server.analyze_batch_summary]:
            Compact overview (health score + hotspots) to decide if pagination is needed.
    """
    canonical_language = _canonical_language(language)

    # Cursor continuation: delegate entirely to analyze_batch
    if cursor is not None:
        return await analyze_batch.fn(
            path,
            canonical_language,
            cursor=cursor,
            max_tokens=max_tokens,
            max_files=max_files,
            enable_external_tools=enable_external_tools,
            allow_temporary_runners=allow_temporary_runners,
        )

    results = await _analyze_repository_internal(
        path,
        [canonical_language],
        max_files,
        None,
        enable_external_tools=enable_external_tools,
        allow_temporary_runners=allow_temporary_runners,
    )
    files_total = len(results)
    all_violations = _build_batch_violations_list(results)

    # Estimate total token cost for all violations
    token_budget = max(1, max_tokens - _BATCH_ENVELOPE_TOKEN_OVERHEAD)
    total_estimated = sum(
        _estimate_tokens(json.dumps(bv.model_dump())) for bv in all_violations
    )

    if total_estimated <= token_budget:
        # Small enough: return everything at once
        return BatchPage(
            cursor=None,
            page=1,
            has_more=False,
            violations=all_violations,
            files_in_page=len({bv.file for bv in all_violations}),
            files_total=files_total,
        )

    # Too large: apply token-budget paging from the start (cursor=None → index 0)
    page_violations: list[BatchViolation] = []
    used_tokens = 0
    next_index = 0
    for bv in all_violations:
        serialised = json.dumps(bv.model_dump())
        cost = _estimate_tokens(serialised)
        if used_tokens + cost > token_budget:
            if not page_violations:
                next_index += 1
            break
        page_violations.append(bv)
        used_tokens += cost
        next_index += 1

    has_more = next_index < len(all_violations)
    next_cursor = _encode_cursor(next_index) if has_more else None
    files_in_page = len({bv.file for bv in page_violations})

    return BatchPage(
        cursor=next_cursor,
        page=1,
        has_more=has_more,
        violations=page_violations,
        files_in_page=files_in_page,
        files_total=files_total,
    )

generate_agent_tasks_tool async

generate_agent_tasks_tool(
    repo_path,
    languages=None,
    min_severity=5,
    *,
    enable_external_tools=False,
    allow_temporary_runners=False,
)

Convert repository-level violations into prioritised remediation tasks.

Agent workflows need structured, machine-readable work items — not prose reports. This tool analyses the repository via _analyze_repository_internal, extracts every violation whose severity meets min_severity, and transforms them into an AgentTaskList ordered by priority. Each task carries the file path, rule identifier, and a concise action description an automated agent can execute without further context.
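The filter-and-prioritise step can be sketched as follows. The violation shape here is deliberately simplified; the real `AgentTaskList` model carries richer fields (file path, rule identifier, action text):

```python
# Simplified sketch of severity filtering and priority ordering; this is an
# illustration, not the server's build_agent_tasks implementation.
def prioritise(violations: list[dict], min_severity: int = 5) -> list[dict]:
    eligible = [v for v in violations if v["severity"] >= min_severity]
    # Highest severity first, so agents tackle the worst findings before the rest.
    return sorted(eligible, key=lambda v: v["severity"], reverse=True)
```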

PARAMETER DESCRIPTION
repo_path

Absolute path to the repository to scan. All eligible source files are discovered recursively.

TYPE: str

languages

Restrict scanning to these languages. When omitted, only Python files are analysed. Defaults to None.

TYPE: list[str] | None DEFAULT: None

min_severity

Severity floor (1-10 scale). Violations below this threshold are excluded from the task list. Defaults to 5.

TYPE: int DEFAULT: 5

enable_external_tools

Opt-in execution of allow-listed external tools while gathering repository analysis. Defaults to False.

TYPE: bool DEFAULT: False

allow_temporary_runners

Permit temporary-runner fallback strategies for external tools. Defaults to False.

TYPE: bool DEFAULT: False

RETURNS DESCRIPTION
AgentTaskList

Prioritised tasks ready for automated remediation, sorted from highest to lowest severity.

TYPE: AgentTaskList

See Also

analyze_repository: Retrieves the raw per-file results that feed task generation.

generate_prompts_tool: Provides human-readable remediation text rather than structured tasks.

Source code in src/mcp_zen_of_languages/server.py
@mcp.tool(
    name="generate_agent_tasks",
    title="Generate agent tasks",
    description=(
        "Convert zen violations into structured agent task lists for automated "
        "remediation."
    ),
    tags={"agent", "tasks", "automation"},
    annotations=READONLY_ANNOTATIONS,
    output_schema=_output_schema(AgentTaskList),
    task=BACKGROUND_TASK,
)
async def generate_agent_tasks_tool(
    repo_path: str,
    languages: list[str] | None = None,
    min_severity: int = 5,
    *,
    enable_external_tools: bool = False,
    allow_temporary_runners: bool = False,
) -> AgentTaskList:
    """Convert repository-level violations into prioritised remediation tasks.

    Agent workflows need structured, machine-readable work items — not
    prose reports.  This tool analyses the repository via
    ``_analyze_repository_internal``, extracts every violation whose
    severity meets *min_severity*, and transforms them into an
    ``AgentTaskList`` ordered by priority.  Each task carries the file
    path, rule identifier, and a concise action description an automated
    agent can execute without further context.

    Args:
        repo_path (str): Absolute path to the repository to scan.  All
            eligible source files are discovered recursively.
        languages (list[str] | None, optional): Restrict scanning to these
            languages.  When omitted, only Python files are analysed. Defaults to None.
        min_severity (int, optional): Severity floor (1-10 scale).  Violations
            below this threshold are excluded from the task list. Defaults to 5.
        enable_external_tools (bool, optional): Opt-in execution of allow-listed
            external tools while gathering repository analysis. Defaults to False.
        allow_temporary_runners (bool, optional): Permit temporary-runner fallback
            strategies for external tools. Defaults to False.

    Returns:
        AgentTaskList: Prioritised tasks ready for automated
        remediation, sorted from highest to lowest severity.

    See Also:
        [`analyze_repository`][mcp_zen_of_languages.server.analyze_repository]:
            Retrieves the raw per-file results that feed task generation.
        [`generate_prompts_tool`][mcp_zen_of_languages.server.generate_prompts_tool]:
            Provides human-readable remediation text rather than
            structured tasks.

    """
    repo_results = await _analyze_repository_internal(
        repo_path,
        languages=languages,
        enable_external_tools=enable_external_tools,
        allow_temporary_runners=allow_temporary_runners,
    )
    analysis_results = [entry.result for entry in repo_results]
    return build_agent_tasks(
        analysis_results,
        project=repo_path,
        min_severity=min_severity,
    )

check_architectural_patterns async

check_architectural_patterns(code, language)

Scan a code snippet for recognised architectural patterns.

Architectural pattern detection is not implemented yet.

PARAMETER DESCRIPTION
code

Source fragment to inspect for structural patterns.

TYPE: str

language

Language identifier guiding which pattern recognisers to apply (e.g. "python", "go").

TYPE: str

RAISES DESCRIPTION
NotImplementedError

Always raised until pattern detection support is implemented.

Source code in src/mcp_zen_of_languages/server.py
@mcp.tool(
    name="check_architectural_patterns",
    title="Check architectural patterns",
    description="Return detected architectural patterns for a code snippet.",
    tags={"analysis", "patterns"},
    annotations=READONLY_ANNOTATIONS,
    output_schema=_output_schema(PatternsResult),
)
async def check_architectural_patterns(code: str, language: str) -> PatternsResult:
    """Scan a code snippet for recognised architectural patterns.

    Architectural pattern detection is not implemented yet.

    Args:
        code (str): Source fragment to inspect for structural patterns.
        language (str): Language identifier guiding which pattern
            recognisers to apply (e.g. ``"python"``, ``"go"``).

    Raises:
        NotImplementedError: Always raised until pattern detection support
            is implemented.

    """
    msg = (
        "check_architectural_patterns is not implemented yet. "
        "Pattern detection is planned but not available in this release."
    )
    raise NotImplementedError(msg)

generate_report_tool async

generate_report_tool(
    target_path,
    language=None,
    perspective=PerspectiveMode.ALL,
    project_as=None,
    *,
    include_prompts=False,
    include_analysis=True,
    include_gaps=True,
    ctx=None,
)

Produce a structured markdown report combining analysis, gaps, and prompts.

Reports are the highest-level output the server offers. They stitch together violation analysis, coverage-gap summaries, and optional remediation prompts into a single ReportOutput whose markdown field is ready for rendering and whose data field carries the machine-readable payload.

Callers control which sections appear through the three boolean flags, making it easy to request a lightweight analysis-only snapshot or a full diagnostic document.
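How the three flags compose can be sketched with plain strings (the section names here are illustrative; the real tool renders full markdown bodies for each section):

```python
# Toy sketch of independent section toggles; not the server's renderer.
def compose_report(
    *,
    include_analysis: bool = True,
    include_gaps: bool = True,
    include_prompts: bool = False,
) -> str:
    # Each flag independently enables one section of the final markdown.
    sections = []
    if include_analysis:
        sections.append("## Analysis")
    if include_gaps:
        sections.append("## Gaps")
    if include_prompts:
        sections.append("## Prompts")
    return "\n\n".join(sections)
```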

PARAMETER DESCRIPTION
target_path

Path to a single file or a directory. When a directory is given, all eligible files inside are analysed.

TYPE: str

language

Explicit language override. When omitted, the language is inferred from file extensions. Defaults to None.

TYPE: str | None DEFAULT: None

perspective

Requested report perspective. Defaults to PerspectiveMode.ALL.

TYPE: PerspectiveMode DEFAULT: PerspectiveMode.ALL

project_as

Projection-family target when perspective is projection. Defaults to None.

TYPE: str | None DEFAULT: None

include_prompts

Append remediation prompt sections derived from build_prompt_bundle. Defaults to False.

TYPE: bool DEFAULT: False

include_analysis

Include the violation-analysis body showing per-rule findings. Defaults to True.

TYPE: bool DEFAULT: True

include_gaps

Include quality-gap and coverage-gap summaries highlighting areas that need attention. Defaults to True.

TYPE: bool DEFAULT: True

ctx

Optional FastMCP context used to emit progress and log updates for analyzed targets. Defaults to None.

TYPE: fastmcp.Context | None DEFAULT: None

RETURNS DESCRIPTION
ReportOutput

Report with markdown (rendered report text) and data (structured dict) ready for MCP client consumption.

TYPE: ReportOutput

See Also

analyze_zen_violations: Underlying snippet analysis powering the report body.

generate_prompts_tool: Standalone prompt generation when a full report is not needed.

Source code in src/mcp_zen_of_languages/server.py
@mcp.tool(
    name="generate_report",
    title="Generate report",
    description="Generate a markdown/json report with gap analysis and prompts.",
    tags={"reporting", "analysis"},
    annotations=READONLY_ANNOTATIONS,
    output_schema=_output_schema(ReportOutput),
    task=BACKGROUND_TASK,
)
async def generate_report_tool(  # noqa: PLR0913
    target_path: str,
    language: str | None = None,
    perspective: PerspectiveMode = PerspectiveMode.ALL,
    project_as: str | None = None,
    *,
    include_prompts: bool = False,
    include_analysis: bool = True,
    include_gaps: bool = True,
    ctx: fastmcp.Context | None = None,
) -> ReportOutput:
    """Produce a structured markdown report combining analysis, gaps, and prompts.

    Reports are the highest-level output the server offers.  They stitch
    together violation analysis, coverage-gap summaries, and optional
    remediation prompts into a single ``ReportOutput`` whose ``markdown``
    field is ready for rendering and whose ``data`` field carries the
    machine-readable payload.

    Callers control which sections appear through the three boolean flags,
    making it easy to request a lightweight analysis-only snapshot or a
    full diagnostic document.

    Args:
        target_path (str): Path to a single file or a directory.  When a
            directory is given, all eligible files inside are analysed.
        language (str | None, optional): Explicit language override.  When omitted,
            the language is inferred from file extensions. Defaults to None.
        perspective (PerspectiveMode, optional): Requested report perspective.
            Defaults to ``PerspectiveMode.ALL``.
        project_as (str | None, optional): Projection-family target when
            ``perspective`` is ``projection``. Defaults to None.
        include_prompts (bool, optional): Append remediation prompt sections derived
            from ``build_prompt_bundle``. Defaults to False.
        include_analysis (bool, optional): Include the violation-analysis body
            showing per-rule findings. Defaults to True.
        include_gaps (bool, optional): Include quality-gap and coverage-gap
            summaries highlighting areas that need attention. Defaults to True.
        ctx (fastmcp.Context | None, optional): Optional FastMCP context used to emit
            progress and log updates for analyzed targets. Defaults to None.

    Returns:
        ReportOutput: Report with ``markdown`` (rendered report text) and ``data``
        (structured dict) ready for MCP client consumption.

    See Also:
        [`analyze_zen_violations`][mcp_zen_of_languages.server.analyze_zen_violations]:
            Underlying snippet analysis powering the report body.
        [`generate_prompts_tool`][mcp_zen_of_languages.server.generate_prompts_tool]:
            Standalone prompt generation when a full report is not needed.

    """
    if ctx is not None:
        await _await_if_needed(
            ctx.log(f"Generating zen-of-languages report for {target_path}")
        )
        await _await_if_needed(ctx.report_progress(0, 1))

    report = generate_report(
        target_path,
        language=language,
        perspective=perspective,
        project_as=project_as,
        include_prompts=include_prompts,
        include_analysis=include_analysis,
        include_gaps=include_gaps,
    )
    if ctx is not None:
        await _await_if_needed(ctx.report_progress(1, 1))
    return ReportOutput(markdown=report.markdown, data=report.data)

export_rule_detector_mapping async

export_rule_detector_mapping(languages=None)

Export the live rule-to-detector wiring from the detector registry.

The registry maps each zen rule (e.g. PY-R001) to the detector class responsible for finding its violations. Exporting this mapping is useful for introspection dashboards, CI tooling that needs to know which rules are actively enforced, and documentation generators that want to list coverage per language.
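The exported structure is nested by language and then rule ID. The inner field names below are illustrative assumptions; consult the tool's actual output for the authoritative schema:

```python
# Hypothetical mapping shape; the "detector" and "config" keys are assumed
# for illustration, not guaranteed by the real export.
mapping = {
    "python": {
        "PY-R001": {"detector": "ComplexityDetector", "config": {"max_complexity": 10}},
    },
}


def rules_for(mapping: dict, language: str) -> list[str]:
    """Rule IDs actively enforced for one language (empty if unregistered)."""
    return sorted(mapping.get(language, {}))
```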

PARAMETER DESCRIPTION
languages

Restrict the export to these language identifiers. When omitted, mappings for every registered language are returned. Defaults to None.

TYPE: list[str] | None DEFAULT: None

RETURNS DESCRIPTION
dict

Nested dictionary keyed by language, then by rule ID, with detector metadata (class name, config schema) as values.

TYPE: dict

See Also

get_supported_languages: Returns the same language keys but paired with detector IDs rather than full mapping metadata.

Source code in src/mcp_zen_of_languages/server.py
@mcp.tool(
    name="export_rule_detector_mapping",
    title="Export rule detector mapping",
    description="Generate rule-detector mapping JSON from the live registry.",
    tags={"metadata", "mapping"},
    annotations=READONLY_ANNOTATIONS,
    output_schema=_output_schema(dict[str, object]),
)
async def export_rule_detector_mapping(
    languages: list[str] | None = None,
) -> dict:
    """Export the live rule-to-detector wiring from the detector registry.

    The registry maps each zen rule (e.g. ``PY-R001``) to the detector
    class responsible for finding its violations.  Exporting this mapping
    is useful for introspection dashboards, CI tooling that needs to know
    which rules are actively enforced, and documentation generators that
    want to list coverage per language.

    Args:
        languages (list[str] | None, optional): Restrict the export to these
            language identifiers.  When omitted, mappings for every
            registered language are returned. Defaults to None.

    Returns:
        dict: Nested dictionary keyed by language, then by rule ID, with
        detector metadata (class name, config schema) as values.

    See Also:
        [`get_supported_languages`][mcp_zen_of_languages.server.get_supported_languages]:
            Returns the same language keys but paired with detector IDs
            rather than full mapping metadata.

    """
    from mcp_zen_of_languages.rules.mapping_export import build_rule_detector_mapping

    return build_rule_detector_mapping(languages)

get_config async

get_config()

Return a snapshot of the running server's configuration.

Combines the static values loaded from zen-config.yaml with any session-scoped overrides applied via set_config_override. Useful for MCP clients that need to display current thresholds or verify that an override took effect before launching an analysis run.

RETURNS DESCRIPTION
ConfigStatus

Snapshot describing active languages, severity threshold, resolved config file path, and a per-language map of overrides.

TYPE: ConfigStatus

See Also

set_config_override: Mutates the runtime overrides reflected in this snapshot.

clear_config_overrides: Resets all overrides so the snapshot matches zen-config.yaml.

Source code in src/mcp_zen_of_languages/server.py
@mcp.tool(
    name="get_config",
    title="Get current configuration",
    description="Return the current server configuration including any runtime overrides.",
    tags={"config", "metadata"},
    annotations=READONLY_ANNOTATIONS,
    output_schema=_output_schema(ConfigStatus),
)
async def get_config() -> ConfigStatus:
    """Return a snapshot of the running server's configuration.

    Combines the static values loaded from ``zen-config.yaml`` with any
    session-scoped overrides applied via ``set_config_override``.  Useful
    for MCP clients that need to display current thresholds or verify
    that an override took effect before launching an analysis run.

    Returns:
        ConfigStatus: Snapshot describing active languages, severity threshold,
        resolved config file path, and a per-language map of overrides.

    See Also:
        [`set_config_override`][mcp_zen_of_languages.server.set_config_override]:
            Mutates the runtime overrides reflected in this snapshot.
        [`clear_config_overrides`][mcp_zen_of_languages.server.clear_config_overrides]:
            Resets all overrides so the snapshot matches ``zen-config.yaml``.

    """
    return _build_config_status()

set_config_override async

set_config_override(
    language,
    max_cyclomatic_complexity=None,
    max_nesting_depth=None,
    max_function_length=None,
    max_class_length=None,
    max_line_length=None,
    severity_threshold=None,
)

Apply session-scoped threshold overrides for a specific language.

Overrides are stored in memory and survive until the server process exits or clear_config_overrides is called. Only the fields explicitly set are overridden — omitted fields retain their zen-config.yaml defaults. Calling this tool a second time for the same language replaces the previous override entirely.

PARAMETER DESCRIPTION
language

Language whose thresholds should be adjusted (e.g. "python").

TYPE: str

max_cyclomatic_complexity

Override the per-function cyclomatic-complexity ceiling. Defaults to None.

TYPE: int | None DEFAULT: None

max_nesting_depth

Override the maximum allowed nesting depth for control-flow blocks. Defaults to None.

TYPE: int | None DEFAULT: None

max_function_length

Override the maximum lines permitted in a single function body. Defaults to None.

TYPE: int | None DEFAULT: None

max_class_length

Override the maximum lines permitted in a single class definition. Defaults to None.

TYPE: int | None DEFAULT: None

max_line_length

Override the maximum character width for a single source line. Defaults to None.

TYPE: int | None DEFAULT: None

severity_threshold

Override the minimum severity at which violations are surfaced in results. Defaults to None.

TYPE: int | None DEFAULT: None

RETURNS DESCRIPTION
ConfigStatus

ConfigStatus reflecting all overrides after this mutation, confirming the change took effect.

TYPE: ConfigStatus

See Also

get_config: Inspect the full configuration without mutating it.

Source code in src/mcp_zen_of_languages/server.py
@mcp.tool(
    name="set_config_override",
    title="Set configuration override",
    description="Override configuration values for a specific language at runtime. Overrides persist for the session.",
    tags={"config", "settings"},
    annotations=MUTATING_ANNOTATIONS,
    output_schema=_output_schema(ConfigStatus),
)
async def set_config_override(  # noqa: PLR0913
    language: str,
    max_cyclomatic_complexity: int | None = None,
    max_nesting_depth: int | None = None,
    max_function_length: int | None = None,
    max_class_length: int | None = None,
    max_line_length: int | None = None,
    severity_threshold: int | None = None,
) -> ConfigStatus:
    """Apply session-scoped threshold overrides for a specific language.

    Overrides are stored in memory and survive until the server process
    exits or ``clear_config_overrides`` is called.  Only the fields
    explicitly set are overridden — omitted fields retain their
    ``zen-config.yaml`` defaults.  Calling this tool a second time for
    the same language **replaces** the previous override entirely.

    Args:
        language (str): Language whose thresholds should be adjusted
            (e.g. ``"python"``).
        max_cyclomatic_complexity (int | None, optional): Override the per-function
            cyclomatic-complexity ceiling. Defaults to None.
        max_nesting_depth (int | None, optional): Override the maximum allowed
            nesting depth for control-flow blocks. Defaults to None.
        max_function_length (int | None, optional): Override the maximum lines
            permitted in a single function body. Defaults to None.
        max_class_length (int | None, optional): Override the maximum lines
            permitted in a single class definition. Defaults to None.
        max_line_length (int | None, optional): Override the maximum character
            width for a single source line. Defaults to None.
        severity_threshold (int | None, optional): Override the minimum severity
            at which violations are surfaced in results. Defaults to None.

    Returns:
        ConfigStatus: ConfigStatus reflecting all overrides after this mutation,
        confirming the change took effect.

    See Also:
        [`get_config`][mcp_zen_of_languages.server.get_config]:
            Inspect the full configuration without mutating it.

    """
    language = _canonical_language(language)
    override = ConfigOverride(
        language=language,
        max_cyclomatic_complexity=max_cyclomatic_complexity,
        max_nesting_depth=max_nesting_depth,
        max_function_length=max_function_length,
        max_class_length=max_class_length,
        max_line_length=max_line_length,
        severity_threshold=severity_threshold,
    )
    _runtime_overrides[language] = override
    return _build_config_status()

clear_config_overrides async

clear_config_overrides()

Remove every session-scoped override, reverting to zen-config.yaml defaults.

After this call, get_config().overrides_applied will be empty and all subsequent analyses will use the thresholds defined in the static configuration file.

RETURNS DESCRIPTION
ConfigStatus

ConfigStatus after all override entries have been cleared.

TYPE: ConfigStatus

Source code in src/mcp_zen_of_languages/server.py
@mcp.tool(
    name="clear_config_overrides",
    title="Clear configuration overrides",
    description="Clear all runtime configuration overrides, reverting to zen-config.yaml defaults.",
    tags={"config", "settings"},
    annotations=MUTATING_ANNOTATIONS,
    output_schema=_output_schema(ConfigStatus),
)
async def clear_config_overrides() -> ConfigStatus:
    """Remove every session-scoped override, reverting to ``zen-config.yaml`` defaults.

    After this call, ``get_config().overrides_applied`` will be empty
    and all subsequent analyses will use the thresholds defined in the
    static configuration file.

    Returns:
        ConfigStatus: ConfigStatus after all override entries have been cleared.

    """
    _runtime_overrides.clear()
    return _build_config_status()

onboard_project async

onboard_project(
    project_path,
    primary_language="python",
    team_size="small",
    strictness="moderate",
    ctx=None,
)

Generate a step-by-step onboarding guide tailored to a project's profile.

The guide walks a new user through five stages — configuration file creation, IDE integration, baseline analysis, threshold tuning, and CI/CD wiring — with concrete examples customised for the selected primary_language and strictness level.

Three strictness presets are available:

  • relaxed — generous thresholds suited to legacy codebases.
  • moderate — balanced defaults for active development.
  • strict — tight limits for greenfield or high-quality projects.
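The numeric values behind each preset, together with the severity threshold it pairs with, are taken from the tool's source and can be expressed as a small lookup table:

```python
# Preset thresholds as wired into onboard_project, paired with the
# severity_threshold each preset recommends (5/6/7).
PRESETS: dict[str, dict[str, int]] = {
    "relaxed": {"complexity": 15, "nesting": 5, "function_length": 100, "line_length": 120, "severity_threshold": 5},
    "moderate": {"complexity": 10, "nesting": 3, "function_length": 50, "line_length": 88, "severity_threshold": 6},
    "strict": {"complexity": 7, "nesting": 2, "function_length": 30, "line_length": 79, "severity_threshold": 7},
}


def recommended_config(language: str, strictness: str) -> dict:
    # Unknown preset names fall back to "moderate", as the tool does.
    t = PRESETS.get(strictness, PRESETS["moderate"])
    return {
        "language": language,
        "max_cyclomatic_complexity": t["complexity"],
        "max_nesting_depth": t["nesting"],
        "max_function_length": t["function_length"],
        "max_line_length": t["line_length"],
        "severity_threshold": t["severity_threshold"],
    }


assert recommended_config("python", "strict")["max_line_length"] == 79
```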
PARAMETER DESCRIPTION
project_path

Absolute path to the project root, used to derive the project name and populate example commands.

TYPE: str

primary_language

Language used for example snippets and default pipeline selection (e.g. "python"). Defaults to "python".

TYPE: str DEFAULT: 'python'

team_size

Descriptive team-size hint ("small", "medium", "large"), reserved for future adaptive threshold scaling. Defaults to "small".

TYPE: str DEFAULT: 'small'

strictness

Preset name controlling all numeric thresholds ("relaxed", "moderate", or "strict"). Defaults to "moderate".

TYPE: str DEFAULT: 'moderate'

ctx

Optional FastMCP context used for elicitation when strictness or language values are ambiguous. Defaults to None.

TYPE: fastmcp.Context | None DEFAULT: None

RETURNS DESCRIPTION
OnboardingGuide

OnboardingGuide with ordered steps, each carrying an action key and example, plus a recommended_config dict ready to write into zen-config.yaml.

TYPE: OnboardingGuide

Example
guide = await onboard_project(
    "/home/dev/webapp", primary_language="typescript", strictness="strict"
)
for step in guide.steps:
    print(f"Step {step.step}: {step.title}")
See Also

set_config_override: Apply recommended thresholds at runtime without editing zen-config.yaml.

Source code in src/mcp_zen_of_languages/server.py
@mcp.tool(
    name="onboard_project",
    title="Onboard a new project",
    description="Get interactive onboarding guidance for setting up zen analysis on a project. Returns recommended configuration based on project characteristics.",
    icons=ONBOARDING_TOOL_ICONS,
    tags={"onboarding", "setup"},
    annotations=READONLY_ANNOTATIONS,
    output_schema=_output_schema(OnboardingGuide),
)
async def onboard_project(
    project_path: str,
    primary_language: str = "python",
    team_size: str = "small",
    strictness: str = "moderate",
    ctx: fastmcp.Context | None = None,
) -> OnboardingGuide:
    """Generate a step-by-step onboarding guide tailored to a project's profile.

    The guide walks a new user through five stages — configuration file
    creation, IDE integration, baseline analysis, threshold tuning, and
    CI/CD wiring — with concrete examples customised for the selected
    *primary_language* and *strictness* level.

    Three strictness presets are available:

    * **relaxed** — generous thresholds suited to legacy codebases.
    * **moderate** — balanced defaults for active development.
    * **strict** — tight limits for greenfield or high-quality projects.

    Args:
        project_path (str): Absolute path to the project root, used to
            derive the project name and populate example commands.
        primary_language (str, optional): Language used for example snippets and
            default pipeline selection (e.g. ``"python"``). Defaults to "python".
        team_size (str, optional): Descriptive team-size hint (``"small"``,
            ``"medium"``, ``"large"``), reserved for future adaptive
            threshold scaling. Defaults to "small".
        strictness (str, optional): Preset name controlling all numeric thresholds
            (``"relaxed"``, ``"moderate"``, or ``"strict"``). Defaults to "moderate".
        ctx (fastmcp.Context | None, optional): Optional FastMCP context used for
            elicitation when strictness or language values are ambiguous. Defaults to None.

    Returns:
        OnboardingGuide: OnboardingGuide with ordered steps, each carrying an action key
        and example, plus a ``recommended_config`` dict ready to write
        into ``zen-config.yaml``.

    Example:
        ```python
        guide = await onboard_project(
            "/home/dev/webapp", primary_language="typescript", strictness="strict"
        )
        for step in guide.steps:
            print(f"Step {step.step}: {step.title}")
        ```

    See Also:
        [`set_config_override`][mcp_zen_of_languages.server.set_config_override]:
            Apply recommended thresholds at runtime without editing
            ``zen-config.yaml``.

    """
    _ = team_size
    thresholds = {
        "relaxed": {
            "complexity": 15,
            "nesting": 5,
            "function_length": 100,
            "line_length": 120,
        },
        "moderate": {
            "complexity": 10,
            "nesting": 3,
            "function_length": 50,
            "line_length": 88,
        },
        "strict": {
            "complexity": 7,
            "nesting": 2,
            "function_length": 30,
            "line_length": 79,
        },
    }
    strictness_value = strictness
    if strictness_value not in thresholds and ctx is not None:
        response = await ctx.elicit(
            "Strictness is ambiguous. Choose one option:",
            response_type=["relaxed", "moderate", "strict"],
        )
        if response.action == "accept":
            strictness_value = str(response.data)
    if strictness_value not in thresholds:
        strictness_value = "moderate"
    t = thresholds[strictness_value]

    canonical_primary_language = _canonical_language(primary_language)
    supported = sorted(supported_languages())
    if canonical_primary_language not in supported and ctx is not None:
        response = await ctx.elicit(
            (
                f"Primary language '{primary_language}' is unsupported. "
                "Select a supported language:"
            ),
            response_type=supported,
        )
        if response.action == "accept":
            canonical_primary_language = _canonical_language(str(response.data))
    if canonical_primary_language not in supported:
        supported_list = ", ".join(supported)
        msg = (
            f"Unsupported language '{primary_language}'. "
            f"Supported languages: {supported_list}."
        )
        raise ValueError(msg)

    steps = [
        OnboardingStep(
            step=1,
            title="Configure zen-config.yaml",
            description=(
                "Create or update zen-config.yaml in your project root with "
                f"{strictness_value} settings."
            ),
            action="create_config",
            example=f"max_cyclomatic_complexity: {t['complexity']}",
        ),
        OnboardingStep(
            step=2,
            title="Define ignored files and folders",
            description=(
                "Create .zen-of-languages.ignore for local exclusions; "
                "existing .gitignore entries are also respected during scans."
            ),
            action="configure_ignore",
            example=".venv/\nnode_modules/\ndist/",
        ),
        OnboardingStep(
            step=3,
            title="Set up MCP client integration",
            description=(
                "Add the MCP server configuration to your editor or agent. "
                "For VS Code add `.vscode/mcp.json`, for Codex append to "
                "`~/.codex/config.toml`, and for GitHub Copilot write "
                "`.github/mcp.json` (repo) or `~/.copilot/mcp-config.json` "
                "(global). Use `mcp-zen-of-languages-cli init --mcp-target <target>` "
                "to scaffold any of these automatically."
            ),
            action="setup_mcp_client",
            example='{"servers":{"zen-of-languages":{"command":"uvx","args":["--from","mcp-zen-of-languages","mcp-zen-of-languages-server"]}}}',
        ),
        OnboardingStep(
            step=4,
            title="Run initial analysis",
            description="Analyze your codebase to establish a baseline of zen violations.",
            action="analyze",
            example=f"analyze_repository('{project_path}', languages=['{canonical_primary_language}'])",
        ),
        OnboardingStep(
            step=5,
            title="Review and adjust thresholds",
            description="Based on initial results, adjust thresholds using set_config_override if needed.",
            action="tune_config",
            example=f"set_config_override('{canonical_primary_language}', max_cyclomatic_complexity={t['complexity']})",
        ),
        OnboardingStep(
            step=6,
            title="Integrate MCP analysis in CI/CD",
            description=(
                "Use MCP tool calls in CI agents for continuous code quality "
                "monitoring; keep terminal CLI checks as optional fallback."
            ),
            action="ci_integration",
            example=(
                f"generate_agent_tasks('{project_path}', "
                f"languages=['{canonical_primary_language}'], min_severity=7)"
            ),
        ),
    ]

    return OnboardingGuide(
        project_name=project_path.rsplit("/", maxsplit=1)[-1],
        steps=steps,
        recommended_config={
            "language": canonical_primary_language,
            "max_cyclomatic_complexity": t["complexity"],
            "max_nesting_depth": t["nesting"],
            "max_function_length": t["function_length"],
            "max_line_length": t["line_length"],
            "severity_threshold": 5
            if strictness_value == "relaxed"
            else 6
            if strictness_value == "moderate"
            else 7,
        },
    )

get_supported_languages async

get_supported_languages()

List every language that has zen rules alongside its registered detector IDs.

This tool queries two registries at once: ZEN_REGISTRY (which holds the canonical zen principles per language) and the detector REGISTRY (which maps rule IDs to detector implementations). The result tells callers not just which languages are known, but how much detector coverage each language currently has.
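The two-registry join can be sketched with plain dicts and tuples standing in for `ZEN_REGISTRY` and the detector `REGISTRY` — the detector IDs below are hypothetical, but the structure matches the description:

```python
# Hypothetical stand-ins: languages that have zen rules, and detector
# records carrying (detector_id, language) pairs. A language of "any"
# marks a language-agnostic detector.
ZEN_REGISTRY: dict[str, list[str]] = {"python": ["zen-1"], "rust": ["zen-2"]}
DETECTORS: list[tuple[str, str]] = [
    ("deep-nesting", "any"),
    ("mutable-default", "python"),
    ("unwrap-chain", "rust"),
]


def coverage() -> dict[str, list[str]]:
    # For each language with zen rules, collect every detector that is
    # registered either for that language or for "any".
    return {
        lang: [d_id for d_id, d_lang in DETECTORS if d_lang in (lang, "any")]
        for lang in ZEN_REGISTRY
    }


assert coverage()["python"] == ["deep-nesting", "mutable-default"]
```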

RETURNS DESCRIPTION
dict[str, list[str]]

Dictionary mapping each language identifier (e.g. "python") to the list of detector IDs wired up for that language.

TYPE: dict[str, list[str]]

See Also

detect_languages: Returns the configured language list from zen-config.yaml rather than the full set of languages with rules.

export_rule_detector_mapping: Provides deeper mapping metadata including config schemas.

Source code in src/mcp_zen_of_languages/server.py
@mcp.tool(
    name="get_supported_languages",
    title="Get supported languages",
    description="Return list of all languages with zen rules and their detector coverage.",
    tags={"metadata", "languages"},
    annotations=READONLY_ANNOTATIONS,
    output_schema=_output_schema(dict[str, list[str]]),
)
async def get_supported_languages() -> dict[str, list[str]]:
    """List every language that has zen rules alongside its registered detector IDs.

    This tool queries two registries at once: ``ZEN_REGISTRY`` (which
    holds the canonical zen principles per language) and the detector
    ``REGISTRY`` (which maps rule IDs to detector implementations).
    The result tells callers not just *which* languages are known, but
    *how much* detector coverage each language currently has.

    Returns:
        dict[str, list[str]]: Dictionary mapping each language identifier (e.g. ``"python"``)
        to the list of detector IDs wired up for that language.

    See Also:
        [`detect_languages`][mcp_zen_of_languages.server.detect_languages]:
            Returns the *configured* language list from ``zen-config.yaml``
            rather than the full set of languages with rules.
        [`export_rule_detector_mapping`][mcp_zen_of_languages.server.export_rule_detector_mapping]:
            Provides deeper mapping metadata including config schemas.

    """
    from mcp_zen_of_languages.analyzers.registry import REGISTRY
    from mcp_zen_of_languages.rules import ZEN_REGISTRY

    result = {}
    for lang in ZEN_REGISTRY:
        detectors = [
            meta.detector_id
            for meta in REGISTRY.items()
            if meta.language in [lang, "any"]
        ]
        result[lang] = detectors
    return result

Tool models

mcp_zen_of_languages.models.AnalysisResult

Bases: BaseModel

Primary output produced by every language analyser.

A call to BaseAnalyzer.analyze() returns exactly one AnalysisResult. It bundles the computed metrics, the full violation list, and a composite health score into a single, JSON-serialisable envelope. The MCP server forwards this model directly to the client; the CLI formats it for terminal display.

Like Violation, this model supports bracket access (result["violations"]) so that legacy dict-oriented test assertions continue to pass without rewrites.
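The bracket access described above can be provided by a `__getitem__` that delegates to attribute lookup — a minimal dataclass sketch (the real model is a Pydantic BaseModel and may implement this differently):

```python
from dataclasses import dataclass


@dataclass
class MiniResult:
    # Plain-dataclass sketch of AnalysisResult's dict-style access:
    # __getitem__ forwards to getattr, so result["key"] works in
    # legacy dict-oriented test assertions.
    language: str
    overall_score: float

    def __getitem__(self, key: str) -> object:
        return getattr(self, key)


r = MiniResult(language="python", overall_score=9.2)
assert r["overall_score"] == 9.2
assert r["language"] == "python"
```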

ATTRIBUTE DESCRIPTION
language

Language key used for analysis (e.g. "python").

TYPE: str

path

File path, or None for inline snippets.

TYPE: str | None

metrics

Computed complexity, maintainability, and LOC.

TYPE: Metrics

violations

Ordered list of detected zen-principle violations.

TYPE: list[Violation]

overall_score

Composite quality score from 0.0 (worst) to 10.0.

TYPE: float

rules_summary

Optional severity histogram for quick triage.

TYPE: RulesSummary | None

Example

result = AnalysisResult(
    language="python",
    path="app/routes.py",
    metrics=Metrics(
        cyclomatic=CyclomaticSummary(blocks=[], average=0.0),
        maintainability_index=80.0,
        lines_of_code=150,
    ),
    violations=[],
    overall_score=9.2,
)
result["overall_score"]  # 9.2

See Also

Metrics: The numeric measurements embedded in this result.

Violation: Individual issues inside the violations list.

RepositoryAnalysis: Wraps an AnalysisResult with file metadata.

mcp_zen_of_languages.models.LanguagesResult

Bases: BaseModel

Enumeration of every language the server can currently analyse.

The list_zen_languages MCP tool returns a LanguagesResult so clients can discover which language keys are valid before calling analysis endpoints. The list is populated at startup from the AnalyzerFactory registry and stays stable for the lifetime of the server process.

ATTRIBUTE DESCRIPTION
languages

Sorted list of supported language identifiers.

TYPE: list[str]

Example

lr = LanguagesResult(languages=["python", "rust", "typescript"])
"python" in lr.languages  # True

See Also

AnalyzerFactory: The registry that defines available languages.

mcp_zen_of_languages.models.PatternsResult

Bases: BaseModel

Bundled response from the architectural-pattern detection pass.

The analyze_zen_patterns MCP tool returns a PatternsResult containing every pattern that was matched in the target code. An empty patterns list simply means no known patterns were detected — it is not an error condition.

ATTRIBUTE DESCRIPTION
patterns

Ordered list of detected pattern findings.

TYPE: list[PatternFinding]

Example

pr = PatternsResult(
    patterns=[
        PatternFinding(name="observer", details="event bus in signals.py"),
    ]
)
len(pr.patterns)  # 1

See Also

PatternFinding: Individual match carried inside the list.

mcp_zen_of_languages.models.RepositoryAnalysis

Bases: BaseModel

Per-file wrapper used when scanning an entire repository.

During a repository-wide analysis the server produces one RepositoryAnalysis per source file, pairing the file's path and detected language with the full AnalysisResult. Collecting these into a list gives the MCP client an iterable, JSON-friendly manifest of every file that was inspected.

ATTRIBUTE DESCRIPTION
path

Repository-relative path to the analysed file.

TYPE: str

language

Language key that the analyser factory resolved.

TYPE: str

result

Complete analysis output for this file.

TYPE: AnalysisResult

Example

entry = RepositoryAnalysis(
    path="lib/parser.py",
    language="python",
    result=analysis_result,
)
entry.path  # 'lib/parser.py'

See Also

AnalysisResult: The per-file detail carried inside result.

ProjectSummary: Aggregate statistics derived from all entries.

mcp_zen_of_languages.reporting.models.ReportOutput

Bases: BaseModel

Final output of the reporting pipeline.

Carries both a human-readable Markdown report and the equivalent machine-readable data dict so that consumers (CLI, MCP tools, CI integrations) can choose whichever representation fits their needs.
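The dual representation can be sketched with a dataclass stand-in for the Pydantic model (field names match; the sample report content is illustrative):

```python
from dataclasses import dataclass, field


@dataclass
class MiniReport:
    # Simplified stand-in for ReportOutput: one human-readable view
    # and one machine-readable view of the same report.
    markdown: str
    data: dict = field(default_factory=dict)


report = MiniReport(
    markdown="# Zen Report\n\n2 violations found.",
    data={"violation_count": 2},
)

# CI integrations consume the dict; humans read the Markdown.
assert report.data["violation_count"] == 2
assert report.markdown.startswith("# Zen Report")
```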

ATTRIBUTE DESCRIPTION
markdown

Fully rendered Markdown report text (normalised whitespace).

TYPE: str

data

Serialised dict mirroring the report structure for JSON output.

TYPE: dict[str, object]

mcp_zen_of_languages.server.ConfigOverride

Bases: BaseModel

Session-scoped override for a single language's analysis thresholds.

When an MCP client calls set_config_override, the supplied values are captured in a ConfigOverride instance and stored in the module-level _runtime_overrides dict, keyed by language. Only non-None fields are considered active — omitted fields leave the corresponding zen-config.yaml default in effect.

Note

Overrides do not persist across server restarts. Call clear_config_overrides to reset mid-session.

mcp_zen_of_languages.server.ConfigStatus

Bases: BaseModel

Read-only snapshot of the server's current configuration state.

Returned by get_config, set_config_override, and clear_config_overrides so callers can confirm the effective settings after every mutation. The overrides_applied field shows only the non-default values injected during the current session.

mcp_zen_of_languages.server.OnboardingStep

Bases: BaseModel

A single instruction in the guided onboarding sequence.

Each step pairs a human-readable title and description with an action key that MCP clients can use to trigger the corresponding operation programmatically, and an optional example showing concrete invocation syntax.
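Based on the fields populated in onboard_project's source earlier on this page, a step can be modelled roughly as follows (dataclass sketch, not the actual Pydantic class):

```python
from dataclasses import dataclass


@dataclass
class MiniStep:
    # Field names mirror OnboardingStep as constructed in onboard_project.
    step: int
    title: str
    description: str
    action: str  # machine-actionable key, e.g. "create_config"
    example: str  # concrete invocation or config snippet


s = MiniStep(
    step=1,
    title="Configure zen-config.yaml",
    description="Create zen-config.yaml in your project root.",
    action="create_config",
    example="max_cyclomatic_complexity: 10",
)
assert s.action == "create_config"
```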

mcp_zen_of_languages.server.OnboardingGuide

Bases: BaseModel

Complete onboarding payload returned by onboard_project.

Bundles an ordered list of OnboardingStep entries with a recommended_config dict that reflects the thresholds appropriate for the caller's chosen strictness profile. MCP clients can render the steps as an interactive wizard or apply recommended_config directly to zen-config.yaml.