Step-by-step guides for the problems that actually block AI adoption in mission-critical environments. Not theory — named patterns that map directly to real controls and real systems.
A step-by-step process for going from a list of internal APIs to a live, verified MCP Connector in production. Start small, validate the governance chain, then expand with confidence.
Inventory your APIs
List every API agents could benefit from accessing. Categorise by data sensitivity: public, internal, sensitive. Note which have OpenAPI specs.
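An inventory like this can be sketched as plain data. The API names, tiers, and fields below are hypothetical placeholders, not a prescribed schema:

```python
# Hypothetical API inventory -- names and tiers are illustrative only.
apis = [
    {"name": "staff-directory", "sensitivity": "internal", "has_openapi_spec": True},
    {"name": "public-holidays", "sensitivity": "public", "has_openapi_spec": True},
    {"name": "payroll", "sensitivity": "sensitive", "has_openapi_spec": False},
]

def by_sensitivity(apis):
    """Group APIs by sensitivity tier so the lowest-risk pilot is easy to spot."""
    groups = {"public": [], "internal": [], "sensitive": []}
    for api in apis:
        groups[api["sensitivity"]].append(api["name"])
    return groups

print(by_sensitivity(apis))
```

Grouping by sensitivity up front makes the next step (picking a low-risk pilot) a lookup rather than a debate.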
Choose your first candidate
Pick the API with the clearest agent use case and the lowest data sensitivity. This is your pilot — validate the governance chain before expanding.
Generate the MCP Connector
Point ARK360 at the OpenAPI spec. It generates a verified MCP tool with authentication, logging and tags. You review and approve the connector definition.
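Connector generation itself is handled by the platform, but the raw material it works from is just the spec's operations. A minimal sketch of enumerating them, using a hypothetical OpenAPI fragment:

```python
import json

# A minimal OpenAPI fragment -- the endpoints are hypothetical.
spec = json.loads("""
{
  "paths": {
    "/staff": {"get": {"operationId": "listStaff"}},
    "/staff/{id}": {"get": {"operationId": "getStaff"},
                    "put": {"operationId": "updateStaff"}}
  }
}
""")

def list_operations(spec):
    """Enumerate (method, path, operationId) triples from an OpenAPI spec --
    the operations a connector definition exposes as agent tools."""
    ops = []
    for path, methods in spec["paths"].items():
        for method, op in methods.items():
            ops.append((method.upper(), path, op["operationId"]))
    return ops

for op in list_operations(spec):
    print(op)
```

Reviewing this operation list is, in effect, what approving a connector definition means: you see exactly which calls an agent could make.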
Apply baseline Policy Presets
Apply a baseline preset: rate limits, audit logging, IP allowlisting. For sensitive data, add data residency and access scoping rules.
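A baseline preset is easiest to reason about as layered data. The field names below are illustrative, not ARK360's actual schema:

```python
# A hypothetical baseline Policy Preset expressed as plain data.
# Field names are illustrative -- not an actual product schema.
baseline_preset = {
    "rate_limit": {"requests_per_minute": 60},
    "audit_logging": True,
    "ip_allowlist": ["10.0.0.0/8"],
}

def harden_for_sensitive_data(preset):
    """Layer residency and access-scoping rules on top of the baseline,
    leaving the baseline itself untouched."""
    hardened = dict(preset)
    hardened["data_residency"] = "AU"
    hardened["access_scope"] = "least-privilege"
    return hardened

print(harden_for_sensitive_data(baseline_preset)["data_residency"])
```

Keeping the baseline immutable and layering stricter rules on top means every connector starts from the same floor, and only sensitive ones carry the extra weight.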
Connect one agent host
Wire one agent (Claude, Copilot, or your own) to the new MCP Connector. Monitor the first calls in the operations dashboard.
Validate and expand
Review the audit trail. Confirm the governance chain works end-to-end. Then expand: more APIs, more agents, more Policy Presets.
A practical approach to working with security, compliance and legal teams who are blocking AI adoption — turning their requirements into Policy Presets they can review, verify and own.
Map the blockers
List every concern your security and compliance team has raised. These typically fall into four buckets: data access, data residency, auditability, and blast radius.
Translate concerns into controls
For each blocker, identify the corresponding control. 'Data leaving Australia' → AU-only residency preset. 'Agents reading everything' → least-privilege connector scoping.
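The blocker-to-control translation is just a mapping, and gaps in it are visible at a glance. The entries below are illustrative examples, not an exhaustive catalogue:

```python
# Illustrative mapping from raised concerns to concrete controls.
blocker_to_control = {
    "data leaving Australia": "AU-only residency preset",
    "agents reading everything": "least-privilege connector scoping",
    "no audit trail": "per-call audit logging",
    "runaway agent behaviour": "rate limits and blast-radius scoping",
}

def unresolved(blockers, mapping):
    """Return every raised concern that has no mapped control yet."""
    return [b for b in blockers if b not in mapping]

print(unresolved(["data leaving Australia", "shadow IT"], blocker_to_control))
```

Any concern that comes back from `unresolved` is a genuine gap to take back to the security team, rather than an objection to argue past.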
Show them the dashboard
Demonstrate the operations dashboard. Security needs to see that every agent call is logged, every policy is applied, and every blocked call is visible. Evidence beats assurances.
Map to your compliance framework
Whether it's APS/ISM, ISO 27001, WHS, or your own risk framework — map each Policy Preset to the specific control it satisfies. Document this for the risk register.
Run a controlled pilot
Propose a time-boxed pilot: one API, one agent, one Policy Preset, 30 days. Give security observer access to the dashboard. Let the evidence speak.
Your core systems don't need to change. This playbook shows how to connect AI agents to legacy and mission-critical systems by wrapping at the API layer, limiting blast radius, and keeping the underlying system completely untouched.
Identify the AI use case
What does the agent need to do? Query, summarise, surface, alert? The use case determines what API operations are needed — and that determines the connector scope.
Wrap the system at the API layer
Don't touch the core system. Generate an MCP Connector from the existing API. The core system doesn't know it's talking to an AI — it just sees authenticated API calls.
Limit the blast radius
Scope the MCP Connector to read-only operations where possible. For write operations, add approval policies: the agent proposes, a human confirms.
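Read-only scoping can be expressed as a filter over the API's operations: expose the safe reads, and route everything else through approval. The operations below are hypothetical:

```python
# Hypothetical connector scoping: expose read-only operations directly,
# route write operations through a human-approval policy instead.
operations = [
    ("GET", "/tickets"),
    ("GET", "/tickets/{id}"),
    ("POST", "/tickets"),
    ("DELETE", "/tickets/{id}"),
]

READ_ONLY = {"GET", "HEAD", "OPTIONS"}

def scope_connector(operations):
    """Split operations into directly exposed reads and approval-gated writes."""
    exposed = [op for op in operations if op[0] in READ_ONLY]
    needs_approval = [op for op in operations if op[0] not in READ_ONLY]
    return exposed, needs_approval

exposed, needs_approval = scope_connector(operations)
print(len(exposed), len(needs_approval))
```

The split makes the blast radius explicit: the agent can only ever execute what is in `exposed` on its own; everything in `needs_approval` requires a human confirmation.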
Apply mission-critical guardrails
Rate limits prevent runaway agent calls. Audit logging creates the paper trail. Policy Presets enforce least-privilege access. The core system stays stable.
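The rate-limit guardrail is commonly implemented as a token bucket. A minimal self-contained sketch (not the product's actual implementation):

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: caps runaway agent call rates
    while allowing short bursts up to the bucket capacity."""

    def __init__(self, rate_per_sec, capacity):
        self.rate = rate_per_sec
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        # Refill tokens based on elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Capacity of 2 allows two immediate calls; the third is rejected
# until tokens refill at 5 per second.
bucket = TokenBucket(rate_per_sec=5, capacity=2)
results = [bucket.allow() for _ in range(3)]
print(results)
```

Each denied call is exactly the kind of event the audit log and dashboard should surface: the guardrail worked, and there is a record of it.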
Monitor and iterate
Use the operations dashboard to understand how the agent is using the system. Tighten or relax controls based on real usage patterns, not guesswork.
Keep going
See the real blockers these patterns solve
Government data sovereignty. Enterprise blast radius. Safety auditability. Concrete scenarios, governed solutions.
Explore use cases
See what the governance layer actually does
MCP Connectors, Policy Presets, operations dashboard — and how it fits your existing Azure stack.
See the product