airbyte_ops_mcp.mcp.regression_tests

MCP tools for connector regression tests.

This module provides MCP tools for triggering regression tests on Airbyte Cloud connections via GitHub Actions workflows. Regression tests can run in two modes:

  • Single version mode: Tests a connector version against a connection config
  • Comparison mode: Compares a target version against a control (baseline) version

Tests run asynchronously in GitHub Actions and results can be polled via workflow status.

Note: The term "regression tests" encompasses all connector validation testing. The term "live tests" is reserved for scenarios where actual Cloud connections are pinned to pre-release versions for real-world validation.

MCP reference

MCP primitives registered by the regression_tests module of the airbyte-internal-ops server: 1 tool, 0 prompts, 0 resources.

Tools (1)

run_regression_tests

Hints: open-world

Start a regression test run via GitHub Actions workflow.

This tool triggers the regression test workflow which builds the connector from the specified PR and runs tests against it.

Supports both OSS connectors (from airbytehq/airbyte) and enterprise connectors (from airbytehq/airbyte-enterprise). Use the 'repo' parameter to specify which repository contains the connector PR.

  • skip_compare=False (default): Comparison mode - compares the PR version against the baseline (control) version.
  • skip_compare=True: Single-version mode - runs tests without comparison.

If connection_id is provided, config/catalog are fetched from Airbyte Cloud. Otherwise, GSM integration test secrets are used.

Returns immediately with a run_id and workflow URL. Check the workflow URL to monitor progress and view results.

Requires GITHUB_CI_WORKFLOW_TRIGGER_PAT or GITHUB_TOKEN environment variable with 'actions:write' permission.
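The token requirement can be sketched as follows (hypothetical helper `pick_ci_token`; the server's actual logic lives in `resolve_ci_trigger_github_token`, and the precedence shown, dedicated PAT before the generic token, is an assumption):

```python
def pick_ci_token(env: dict[str, str]) -> str:
    """Resolve a GitHub token for triggering workflows from an env mapping.

    Assumption: GITHUB_CI_WORKFLOW_TRIGGER_PAT takes precedence over GITHUB_TOKEN.
    """
    token = env.get("GITHUB_CI_WORKFLOW_TRIGGER_PAT") or env.get("GITHUB_TOKEN")
    if not token:
        raise ValueError(
            "Set GITHUB_CI_WORKFLOW_TRIGGER_PAT or GITHUB_TOKEN "
            "with 'actions:write' permission."
        )
    return token
```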

Parameters:

  • connector_name (string, required): Connector name to build from source (e.g., 'source-pokeapi').
  • pr (integer, required): PR number to checkout and build from (e.g., 70847). The PR must be from the repository specified by the 'repo' parameter.
  • repo (enum: "airbyte" | "airbyte-enterprise", required): Repository where the connector PR is located. Use 'airbyte' for OSS connectors or 'airbyte-enterprise' for enterprise connectors.
  • connection_id (string | null, default: null): Airbyte Cloud connection ID to fetch config/catalog from. If not provided, GSM integration test secrets are used.
  • skip_compare (boolean, default: false): If true, skip comparison and run single-version tests only. If false (default), run comparison tests (target vs. control versions).
  • skip_read_action (boolean, default: false): If true, skip the read action (run only spec, check, discover). If false (default), run all verbs including read.
  • override_test_image (string | null, default: null): Override the test connector image with tag (e.g., 'airbyte/source-github:1.0.0'). Ignored if skip_compare=False.
  • override_control_image (string | null, default: null): Override the control connector image (baseline version) with tag. Ignored if skip_compare=True.
  • workspace_id (string (UUID) or alias | null, default: null): Optional Airbyte Cloud workspace ID or alias. If provided with connection_id, validates that the connection belongs to this workspace before triggering tests. Accepts '@devin-ai-sandbox' as an alias for the Devin AI sandbox workspace.
  • selected_streams (array<string> | null, default: null): Stream names to include in the read; only these streams are included in the configured catalog. Useful to limit data volume by testing only specific streams. If not provided, all streams are tested.
  • enable_debug_logs (boolean, default: false): Enable debug-level logging for regression test output. Also passed as LOG_LEVEL=DEBUG to the connector Docker container.
  • with_state (boolean | null, default: null): Fetch and pass the connection's current state to the read command, producing a warm read instead of a cold read. Defaults to true when connection_id is provided, false otherwise. Has no effect unless the command is read.

Input JSON schema:

{
  "additionalProperties": false,
  "properties": {
    "connector_name": {
      "description": "Connector name to build from source (e.g., 'source-pokeapi'). Required.",
      "type": "string"
    },
    "pr": {
      "description": "PR number to checkout and build from (e.g., 70847). Required. The PR must be from the repository specified by the 'repo' parameter.",
      "type": "integer"
    },
    "repo": {
      "description": "Repository where the connector PR is located. Use 'airbyte' for OSS connectors (default) or 'airbyte-enterprise' for enterprise connectors.",
      "enum": [
        "airbyte",
        "airbyte-enterprise"
      ],
      "type": "string"
    },
    "connection_id": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Airbyte Cloud connection ID to fetch config/catalog from. If not provided, uses GSM integration test secrets."
    },
    "skip_compare": {
      "default": false,
      "description": "If True, skip comparison and run single-version tests only. If False (default), run comparison tests (target vs control versions).",
      "type": "boolean"
    },
    "skip_read_action": {
      "default": false,
      "description": "If True, skip the read action (run only spec, check, discover). If False (default), run all verbs including read.",
      "type": "boolean"
    },
    "override_test_image": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Override test connector image with tag (e.g., 'airbyte/source-github:1.0.0'). Ignored if skip_compare=False."
    },
    "override_control_image": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Override control connector image (baseline version) with tag. Ignored if skip_compare=True."
    },
    "workspace_id": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "description": "Workspace ID aliases that can be used in place of UUIDs.\n\nEach member's name is the alias (e.g., \"@devin-ai-sandbox\") and its value\nis the actual workspace UUID. Use `WorkspaceAliasEnum.resolve()` to\nresolve aliases to actual IDs.",
          "enum": [
            "266ebdfe-0d7b-4540-9817-de7e4505ba61"
          ],
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Optional Airbyte Cloud workspace ID (UUID) or alias. If provided with connection_id, validates that the connection belongs to this workspace before triggering tests. Accepts '@devin-ai-sandbox' as an alias for the Devin AI sandbox workspace."
    },
    "selected_streams": {
      "anyOf": [
        {
          "items": {
            "type": "string"
          },
          "type": "array"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "List of stream names to include in the read. Only these streams will be included in the configured catalog. This is useful to limit data volume by testing only specific streams. If not provided, all streams are tested."
    },
    "enable_debug_logs": {
      "default": false,
      "description": "Enable debug-level logging for regression test output. Also passed as `LOG_LEVEL=DEBUG` to the connector Docker container.",
      "type": "boolean"
    },
    "with_state": {
      "anyOf": [
        {
          "type": "boolean"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Fetch and pass the connection's current state to the read command, producing a warm read instead of a cold read. Defaults to `True` when `connection_id` is provided, `False` otherwise. Has no effect unless the command is `read`."
    }
  },
  "required": [
    "connector_name",
    "pr",
    "repo"
  ],
  "type": "object"
}
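As a quick illustration of the schema above, the smallest valid payload supplies only the three required fields; the values here are the examples used in the parameter descriptions, and everything else falls back to its default (null/false):

```python
import json

# Minimal valid input for run_regression_tests per the schema's "required" list.
minimal_call = {
    "connector_name": "source-pokeapi",
    "pr": 70847,
    "repo": "airbyte",
}

required = ["connector_name", "pr", "repo"]
assert all(key in minimal_call for key in required)
print(json.dumps(minimal_call, indent=2))
```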

Output JSON schema:

{
  "description": "Response from starting a regression test via GitHub Actions workflow.",
  "properties": {
    "run_id": {
      "description": "Unique identifier for the test run (internal tracking ID)",
      "type": "string"
    },
    "status": {
      "description": "Initial status of the test run",
      "enum": [
        "queued",
        "running",
        "succeeded",
        "failed"
      ],
      "type": "string"
    },
    "message": {
      "description": "Human-readable status message",
      "type": "string"
    },
    "workflow_url": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "URL to view the GitHub Actions workflow file"
    },
    "github_run_id": {
      "anyOf": [
        {
          "type": "integer"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "GitHub Actions workflow run ID (use with check_ci_workflow_status)"
    },
    "github_run_url": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Direct URL to the GitHub Actions workflow run"
    }
  },
  "required": [
    "run_id",
    "status",
    "message"
  ],
  "type": "object"
}
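A caller might handle the response like the sketch below. The field values are illustrative, not real output; the URL fallback mirrors the tool's own behavior of preferring the direct run URL over the workflow file URL when GitHub resolved one:

```python
# Illustrative response shaped like the output schema above (values are made up).
response = {
    "run_id": "123e4567-e89b-12d3-a456-426614174000",
    "status": "queued",
    "message": "Comparison regression test workflow triggered for source-pokeapi (PR #70847).",
    "workflow_url": (
        "https://github.com/airbytehq/airbyte-ops-mcp/actions/workflows/"
        "connector-regression-test.yml"
    ),
    "github_run_id": None,
    "github_run_url": None,
}

# Prefer the direct run URL when available; fall back to the workflow file URL.
link = response["github_run_url"] or response["workflow_url"]
assert response["status"] in {"queued", "running", "succeeded", "failed"}
```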

Module source

# Copyright (c) 2025 Airbyte, Inc., all rights reserved.
"""MCP tools for connector regression tests.

This module provides MCP tools for triggering regression tests on Airbyte Cloud
connections via GitHub Actions workflows. Regression tests can run in two modes:
- Single version mode: Tests a connector version against a connection config
- Comparison mode: Compares a target version against a control (baseline) version

Tests run asynchronously in GitHub Actions and results can be polled via workflow status.

Note: The term "regression tests" encompasses all connector validation testing.
The term "live tests" is reserved for scenarios where actual Cloud connections
are pinned to pre-release versions for real-world validation.

## MCP reference

.. include:: ../../../docs/mcp-generated/regression_tests.md
    :start-line: 2
"""

from __future__ import annotations

__all__: list[str] = []

import uuid
from datetime import datetime
from enum import Enum
from typing import Annotated, Any

import requests
from airbyte.cloud import CloudWorkspace
from airbyte.cloud.auth import resolve_cloud_client_id, resolve_cloud_client_secret
from airbyte.exceptions import (
    AirbyteMissingResourceError,
    AirbyteWorkspaceMismatchError,
)
from fastmcp import FastMCP
from fastmcp_extensions import mcp_tool, register_mcp_tools
from pydantic import BaseModel, Field

from airbyte_ops_mcp.constants import WorkspaceAliasEnum
from airbyte_ops_mcp.github_actions import trigger_workflow_dispatch
from airbyte_ops_mcp.github_api import (
    GITHUB_API_BASE,
    resolve_ci_trigger_github_token,
)
from airbyte_ops_mcp.mcp.prerelease import ConnectorRepo

# =============================================================================
# GitHub Workflow Configuration
# =============================================================================

REGRESSION_TEST_REPO_OWNER = "airbytehq"
REGRESSION_TEST_REPO_NAME = "airbyte-ops-mcp"
REGRESSION_TEST_DEFAULT_BRANCH = "main"
# Unified regression test workflow (handles both single-version and comparison modes)
REGRESSION_TEST_WORKFLOW_FILE = "connector-regression-test.yml"


# =============================================================================
# Workspace Validation Helpers
# =============================================================================


def validate_connection_workspace(
    connection_id: str,
    workspace_id: str,
) -> None:
    """Validate that a connection belongs to the expected workspace.

    Uses PyAirbyte's CloudConnection.check_is_valid() method to verify that
    the connection exists and belongs to the specified workspace.

    Raises:
        ValueError: If Airbyte Cloud credentials are missing.
        AirbyteWorkspaceMismatchError: If connection belongs to a different workspace.
        AirbyteMissingResourceError: If connection is not found.
    """
    client_id = resolve_cloud_client_id()
    client_secret = resolve_cloud_client_secret()
    if not client_id or not client_secret:
        raise ValueError(
            "Missing Airbyte Cloud credentials. "
            "Set AIRBYTE_CLOUD_CLIENT_ID and AIRBYTE_CLOUD_CLIENT_SECRET env vars."
        )

    workspace = CloudWorkspace(
        workspace_id=workspace_id,
        client_id=client_id,
        client_secret=client_secret,
    )
    connection = workspace.get_connection(connection_id)
    connection.check_is_valid()


def _get_workflow_run_status(
    owner: str,
    repo: str,
    run_id: int,
    token: str,
) -> dict[str, Any]:
    """Get workflow run details from GitHub API.

    Args:
        owner: Repository owner (e.g., "airbytehq")
        repo: Repository name (e.g., "airbyte-ops-mcp")
        run_id: Workflow run ID
        token: GitHub API token

    Returns:
        Workflow run data dictionary.

    Raises:
        ValueError: If workflow run not found.
        requests.HTTPError: If API request fails.
    """
    url = f"{GITHUB_API_BASE}/repos/{owner}/{repo}/actions/runs/{run_id}"
    headers = {
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.github+json",
        "X-GitHub-Api-Version": "2022-11-28",
    }

    response = requests.get(url, headers=headers, timeout=30)
    if response.status_code == 404:
        raise ValueError(f"Workflow run {owner}/{repo}/actions/runs/{run_id} not found")
    response.raise_for_status()

    return response.json()


# =============================================================================
# Pydantic Models for Test Results
# =============================================================================


class TestRunStatus(str, Enum):
    """Status of a test run."""

    QUEUED = "queued"
    RUNNING = "running"
    SUCCEEDED = "succeeded"
    FAILED = "failed"


class TestOutcome(str, Enum):
    """Outcome of a test (execution or comparison)."""

    PENDING = "pending"
    RUNNING = "running"
    PASSED = "passed"
    FAILED = "failed"
    SKIPPED = "skipped"


class ValidationResultModel(BaseModel):
    """Result of a single validation check."""

    name: str = Field(description="Name of the validation check")
    passed: bool = Field(description="Whether the validation passed")
    message: str = Field(description="Human-readable result message")
    errors: list[str] = Field(
        default_factory=list,
        description="List of error messages if validation failed",
    )


class StreamComparisonResultModel(BaseModel):
    """Result of comparing a single stream between control and target."""

    stream_name: str = Field(description="Name of the stream")
    passed: bool = Field(description="Whether all comparisons passed")
    control_record_count: int = Field(description="Number of records in control")
    target_record_count: int = Field(description="Number of records in target")
    missing_pks: list[str] = Field(
        default_factory=list,
        description="Primary keys present in control but missing in target",
    )
    differing_records: int = Field(
        default=0,
        description="Number of records that differ between control and target",
    )
    message: str = Field(description="Human-readable comparison summary")


class RegressionTestExecutionResult(BaseModel):
    """Results from executing the connector (validations and record counts)."""

    outcome: TestOutcome = Field(description="Outcome of the execution")
    catalog_validations: list[ValidationResultModel] = Field(
        default_factory=list,
        description="Results of catalog validation checks",
    )
    record_validations: list[ValidationResultModel] = Field(
        default_factory=list,
        description="Results of record validation checks",
    )
    record_count: int = Field(
        default=0,
        description="Total number of records read",
    )
    error_message: str | None = Field(
        default=None,
        description="Error message if the execution failed",
    )


class RegressionTestComparisonResult(BaseModel):
    """Results from comparing target vs control connector versions."""

    outcome: TestOutcome = Field(description="Outcome of the comparison")
    baseline_version: str | None = Field(
        default=None,
        description="Version of the baseline (control) connector",
    )
    stream_comparisons: list[StreamComparisonResultModel] = Field(
        default_factory=list,
        description="Per-stream comparison results",
    )
    error_message: str | None = Field(
        default=None,
        description="Error message if the comparison failed",
    )


class RegressionTestResult(BaseModel):
    """Complete result of a regression test run."""

    run_id: str = Field(description="Unique identifier for this test run")
    connection_id: str = Field(description="The connection being tested")
    workspace_id: str = Field(description="The workspace containing the connection")
    status: TestRunStatus = Field(description="Overall status of the test run")
    target_version: str | None = Field(
        default=None,
        description="Version of the target connector being tested",
    )
    baseline_version: str | None = Field(
        default=None,
        description="Version of the baseline connector (if comparison mode)",
    )
    evaluation_mode: str = Field(
        default="diagnostic",
        description="Evaluation mode used (diagnostic or strict)",
    )
    compare_versions: bool = Field(
        default=False,
        description="Whether comparison mode was used (target vs control)",
    )
    execution_result: RegressionTestExecutionResult | None = Field(
        default=None,
        description="Results from executing the connector (validations and record counts)",
    )
    comparison_result: RegressionTestComparisonResult | None = Field(
        default=None,
        description="Results from comparing target vs control connector versions",
    )
    artifacts: dict[str, str] = Field(
        default_factory=dict,
        description="Paths to generated artifacts (JSONL, DuckDB, HAR files)",
    )
    human_summary: str = Field(
        default="",
        description="Human-readable summary of the test results",
    )
    started_at: datetime | None = Field(
        default=None,
        description="When the test run started",
    )
    completed_at: datetime | None = Field(
        default=None,
        description="When the test run completed",
    )
    test_description: str | None = Field(
        default=None,
        description="Optional description/context for this test run",
    )


class RunRegressionTestsResponse(BaseModel):
    """Response from starting a regression test via GitHub Actions workflow."""

    run_id: str = Field(
        description="Unique identifier for the test run (internal tracking ID)"
    )
    status: TestRunStatus = Field(description="Initial status of the test run")
    message: str = Field(description="Human-readable status message")
    workflow_url: str | None = Field(
        default=None,
        description="URL to view the GitHub Actions workflow file",
    )
    github_run_id: int | None = Field(
        default=None,
        description="GitHub Actions workflow run ID (use with check_ci_workflow_status)",
    )
    github_run_url: str | None = Field(
        default=None,
        description="Direct URL to the GitHub Actions workflow run",
    )


# =============================================================================
# MCP Tools
# =============================================================================


@mcp_tool(
    read_only=False,
    idempotent=False,
    open_world=True,
)
def run_regression_tests(
    connector_name: Annotated[
        str,
        "Connector name to build from source (e.g., 'source-pokeapi'). Required.",
    ],
    pr: Annotated[
        int,
        "PR number to checkout and build from (e.g., 70847). Required. "
        "The PR must be from the repository specified by the 'repo' parameter.",
    ],
    repo: Annotated[
        ConnectorRepo,
        "Repository where the connector PR is located. "
        "Use 'airbyte' for OSS connectors (default) or 'airbyte-enterprise' for enterprise connectors.",
    ],
    connection_id: Annotated[
        str | None,
        "Airbyte Cloud connection ID to fetch config/catalog from. "
        "If not provided, uses GSM integration test secrets.",
    ] = None,
    skip_compare: Annotated[
        bool,
        "If True, skip comparison and run single-version tests only. "
        "If False (default), run comparison tests (target vs control versions).",
    ] = False,
    skip_read_action: Annotated[
        bool,
        "If True, skip the read action (run only spec, check, discover). "
        "If False (default), run all verbs including read.",
    ] = False,
    override_test_image: Annotated[
        str | None,
        "Override test connector image with tag (e.g., 'airbyte/source-github:1.0.0'). "
        "Ignored if skip_compare=False.",
    ] = None,
    override_control_image: Annotated[
        str | None,
        "Override control connector image (baseline version) with tag. "
        "Ignored if skip_compare=True.",
    ] = None,
    workspace_id: Annotated[
        str | WorkspaceAliasEnum | None,
        "Optional Airbyte Cloud workspace ID (UUID) or alias. If provided with connection_id, "
        "validates that the connection belongs to this workspace before triggering tests. "
        "Accepts '@devin-ai-sandbox' as an alias for the Devin AI sandbox workspace.",
    ] = None,
    selected_streams: Annotated[
        list[str] | None,
        "List of stream names to include in the read. Only these streams will be included "
        "in the configured catalog. This is useful to limit data volume by testing only "
        "specific streams. If not provided, all streams are tested.",
    ] = None,
    enable_debug_logs: Annotated[
        bool,
        "Enable debug-level logging for regression test output. "
        "Also passed as `LOG_LEVEL=DEBUG` to the connector Docker container.",
    ] = False,
    with_state: Annotated[
        bool | None,
        "Fetch and pass the connection's current state to the read command, "
        "producing a warm read instead of a cold read. Defaults to `True` when "
        "`connection_id` is provided, `False` otherwise. Has no effect unless "
        "the command is `read`.",
    ] = None,
) -> RunRegressionTestsResponse:
    """Start a regression test run via GitHub Actions workflow.

    This tool triggers the regression test workflow which builds the connector
    from the specified PR and runs tests against it.

    Supports both OSS connectors (from airbytehq/airbyte) and enterprise connectors
    (from airbytehq/airbyte-enterprise). Use the 'repo' parameter to specify which
    repository contains the connector PR.

    - skip_compare=False (default): Comparison mode - compares the PR version
      against the baseline (control) version.
    - skip_compare=True: Single-version mode - runs tests without comparison.

    If connection_id is provided, config/catalog are fetched from Airbyte Cloud.
    Otherwise, GSM integration test secrets are used.

    Returns immediately with a run_id and workflow URL. Check the workflow URL
    to monitor progress and view results.

    Requires GITHUB_CI_WORKFLOW_TRIGGER_PAT or GITHUB_TOKEN environment variable
    with 'actions:write' permission.
    """
    # Resolve workspace ID alias
    resolved_workspace_id = WorkspaceAliasEnum.resolve(workspace_id)

    # Generate a unique run ID for tracking
    run_id = str(uuid.uuid4())

    # Get GitHub token
    try:
        token = resolve_ci_trigger_github_token()
    except ValueError as e:
        return RunRegressionTestsResponse(
            run_id=run_id,
            status=TestRunStatus.FAILED,
            message=str(e),
            workflow_url=None,
        )

    # Validate workspace membership if workspace_id and connection_id are provided
    if resolved_workspace_id and connection_id:
        try:
            validate_connection_workspace(connection_id, resolved_workspace_id)
        except (
            ValueError,
            AirbyteWorkspaceMismatchError,
            AirbyteMissingResourceError,
        ) as e:
            return RunRegressionTestsResponse(
                run_id=run_id,
                status=TestRunStatus.FAILED,
                message=str(e),
                workflow_url=None,
            )

    # Build workflow inputs - connector_name, pr, and repo are required
    workflow_inputs: dict[str, str] = {
        "connector_name": connector_name,
        "pr": str(pr),
        "repo": repo,
    }

    # Add optional inputs
    if connection_id:
        workflow_inputs["connection_id"] = connection_id
    if skip_compare:
        workflow_inputs["skip_compare"] = "true"
    if skip_read_action:
        workflow_inputs["skip_read_action"] = "true"
    if override_test_image:
        workflow_inputs["override_test_image"] = override_test_image
    if override_control_image:
        workflow_inputs["override_control_image"] = override_control_image
    if selected_streams:
        workflow_inputs["selected_streams"] = ",".join(selected_streams)
    if enable_debug_logs:
        workflow_inputs["enable_debug_logs"] = "true"
    if with_state is True:
        workflow_inputs["with_state"] = "true"
    elif with_state is False:
        workflow_inputs["with_state"] = "false"

    mode_description = "single-version" if skip_compare else "comparison"

    dispatch_result = trigger_workflow_dispatch(
        owner=REGRESSION_TEST_REPO_OWNER,
        repo=REGRESSION_TEST_REPO_NAME,
        workflow_file=REGRESSION_TEST_WORKFLOW_FILE,
        ref=REGRESSION_TEST_DEFAULT_BRANCH,
        inputs=workflow_inputs,
        token=token,
    )

    view_url = dispatch_result.run_url or dispatch_result.workflow_url
    connection_info = f" for connection {connection_id}" if connection_id else ""
    repo_info = f" from {repo}" if repo != ConnectorRepo.AIRBYTE else ""
    return RunRegressionTestsResponse(
        run_id=run_id,
        status=TestRunStatus.QUEUED,
        message=(
            f"{mode_description.capitalize()} regression test workflow triggered "
            f"for {connector_name} (PR #{pr}{repo_info}){connection_info}. View progress at: {view_url}"
        ),
        workflow_url=dispatch_result.workflow_url,
        github_run_id=dispatch_result.run_id,
        github_run_url=dispatch_result.run_url,
    )


# =============================================================================
# Registration
# =============================================================================


def register_regression_tests_tools(app: FastMCP) -> None:
    """Register regression tests tools with the FastMCP app.

    Args:
        app: FastMCP application instance
    """
    register_mcp_tools(app, mcp_module=__name__)