Version: ADONIS 16-17/ADOIT 17/ADOGRC 13

Model Context Protocol (MCP)

The Model Context Protocol (MCP) is an open standard that describes how large language models (LLMs) can connect to external tools, systems and data sources to access context-specific information (https://en.wikipedia.org/wiki/Model_Context_Protocol, https://modelcontextprotocol.io/introduction).

This article shows how the REST API can be used to provide external tools in a simple MCP scenario which consists of a local agent (implemented in Python) backed by an LLM (in this case OpenAI).

Scenarios

This section describes two scenarios in which ADOxx-based products (ADONIS and ADOIT) provide access to their data via their REST API.

MCP Scenario

note

IMPORTANT: These are exemplary scenarios that showcase how the REST API of ADOxx-based products can technically be used in an MCP scenario. For the sake of simplicity and compactness, the code described in this article uses Basic authentication, cleartext passwords in configuration files and direct search.

The description and files documented in this article form an exemplary scenario that serves as a starting point for implementing specific MCP scenarios.

In a production scenario, the provided example has to be adapted to follow the usual best practices and standards, such as using OAuth 2.0 for authentication, not storing cleartext passwords, and considering the REST API Best Practices.

ADONIS - Summarizing process models

In this scenario, a user should be able to ask about a certain task or a process contained within the ADONIS sample repository (e.g. "Tell me about creating a new customer").

Such a request will be handled by an agent (implemented in Python) that is backed by an LLM (in this case OpenAI).

The LLM will instruct the agent to use ADONIS' REST API to look for a suitable model based on the user's request.

The agent will provide the search result to the LLM, which (if any model was found) will pick the best-fitting model and instruct the agent to use ADONIS' REST API to retrieve the model's details.

The agent will provide those details to the LLM which will then summarize the model contents.

ADOIT - Finding application components

In this scenario, a user should be able to ask for applications within the ADOIT sample repository with a certain integrity, availability or confidentiality that are owned by a certain user (e.g. "Which applications with high availability are owned by arch?").

The request will be handled by an agent (implemented in Python) that is backed by an LLM (in this case OpenAI).

The LLM will instruct the agent to use ADOIT's REST API to look for applications matching the user's request.

The agent will then show the data (name, ID, direct link, a summary of the description) of the found applications.

Prerequisites

To run this example, the required Python packages have to be installed:

pip install -r requirements.txt
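Judging from the imports used in the example code, the installed packages include at least the MCP Python SDK, the OpenAI client library and requests. In addition, an OpenAI API key and a running ADONIS or ADOIT installation with an accessible REST API are needed (see the Usage section); consult the requirements.txt in the example repository for the exact list of dependencies.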

Agent

The agent is the Python program that runs locally and handles the communication between the user and the LLM.

The code for the agent is contained in https://github.com/BOC-Group/developer-examples/blob/master/usage/mcp/agent.py

When executed, the agent will ask the user for input and then process it. When forwarding a user's request to the backing LLM, the agent will also include a description of the available tools (defined in the Server section - see below):

Click to view the code!

await session.initialize()

# Retrieve the tools provided by the MCP server
tools_response = await session.list_tools()
tools = tools_response.tools

# Convert the MCP tool definitions into OpenAI function definitions
functions = []
for tool in tools:
    function = {
        "name": tool.name,
        "description": tool.description,
        "parameters": tool.inputSchema
    }
    functions.append(function)

# System message that describes the scenario and the LLM's purpose in it
messages = [{
    "role": "system",
    "content": (
        "You are an assistant that can use tools (functions) for interacting with a repository of graphical models and associated data (e.g. BPMN, ArchiMate, etc.). "
        "If the user is asking a general question about a process or a task, try to use the provided tools (functions). "
        "You should always rather use the available tools over answering from your own general knowledge. "
        "Only use general knowledge if a tool is not suitable or available. "
        "IMPORTANT: If you respond using general knowledge and not a tool, explicitly state this at the beginning of your reply "
        "by saying: 'Answer based on general knowledge — no tool was used.'")}]

The content of this system message is important because it describes the scenario and the LLM's purpose in it. Depending on the use case, this content has to be adapted.
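For example, for the ADOIT scenario described above, an adapted system message could emphasize applications instead of processes. The following wording is only an illustrative sketch, not the prompt used in the example repository:

# Illustrative variant of the system message for the ADOIT scenario;
# the actual prompt used by the example agent may differ.
messages = [{
    "role": "system",
    "content": (
        "You are an assistant that can use tools (functions) for interacting with an "
        "enterprise architecture repository (e.g. ArchiMate application components). "
        "If the user asks for applications or application components with certain properties, "
        "always use the provided tools instead of answering from general knowledge. "
        "If no tool is suitable or available, state explicitly that your answer is based on "
        "general knowledge.")}]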

As long as the user's input is not "exit", the agent will contact the LLM to process the input:

Click to view the code!

response = await openai_client.chat.completions.create(
    model="gpt-4o",
    messages=messages,
    functions=functions,
    function_call="auto"
)
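If the LLM decides to call one of the provided functions, the agent has to execute the corresponding MCP tool and feed the result back to the LLM so it can formulate the final answer. The complete handling is part of agent.py; a minimal sketch of this dispatch step, assuming the variables from the snippets above, could look like this:

import json

message = response.choices[0].message

if message.function_call:
    # The LLM requested a tool call: execute it via the MCP session
    tool_name = message.function_call.name
    tool_args = json.loads(message.function_call.arguments or "{}")
    result = await session.call_tool(tool_name, tool_args)

    # Return the tool output to the LLM as a function message
    messages.append({"role": "function", "name": tool_name, "content": str(result.content)})
    followup = await openai_client.chat.completions.create(
        model="gpt-4o",
        messages=messages,
        functions=functions,
        function_call="auto"
    )
    print(followup.choices[0].message.content)
else:
    # No tool was needed: the LLM answered directly
    print(message.content)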

Server

The server part in this case is the provider of the tools for the LLM to call via the agent.
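Both server files register their tools with the @mcp.tool() decorator, which indicates a server based on FastMCP from the MCP Python SDK. A stripped-down skeleton of such a server (the actual files contain the full configuration, logging and tool implementations) looks roughly like this:

# Minimal MCP server skeleton, assuming FastMCP from the MCP Python SDK;
# the server name and tool below are placeholders, not taken from the example files.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("ado-rest-tools")

@mcp.tool()
def example_tool(text: str):
    """Description that the LLM sees when deciding whether to call this tool."""
    return {"echo": text}

if __name__ == "__main__":
    # Start the server so the agent can connect to it (stdio transport by default)
    mcp.run()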

ADONIS - Summarizing process models

The relevant code is contained in the file https://github.com/BOC-Group/developer-examples/blob/master/usage/mcp/server_ado.py. This file defines the functionality and description of the tools available for the LLM to call.

Tool 'get_model_id'

This tool uses ADONIS' REST API to search for a business process diagram whose name is similar to a text chosen by the LLM based on the user's input.

Click to view the code!

@mcp.tool()
def get_model_id(text: str):
    """This tool searches for models by name and returns a list of potential matches along with their IDs.

    Args:
        text (str): The name or partial name of the model to search for.

    Returns:
        Dict[str, Any]: A dictionary containing possible model candidates, each with its ID and name.

    Notes:
        - Use this tool when the user mentions a model by name but you need its ID.
        - Also always use this tool first when the user asks for a certain task or a process - do not fall back to general knowledge.
        - Always choose the most relevant match based on the user's intent or context.
        - Never ask the user to choose a model, even if multiple models are possible.
        - Always print out the name of the model you chose, even if there is only one result.
        - If you think that none of the found models matches the user's request, state that no matching model was found and fall back to general knowledge."""
    logging.info("get_model_id called with text: %s", text)

    try:
        # Search query for BPMN business process diagrams whose name is similar to the given text
        query = f"""{{
            scope:
            {{
                models:true
            }},
            attributes:["NAME"],
            filters:
            [
                {{
                    className:"MT_BUSINESS_PROCESS_DIAGRAM_BPMN_20"
                }},
                {{
                    attrName:"NAME",
                    op:"OP_LIKE",
                    value:"{text}"
                }}
            ]
        }}"""
        encoded_query = urllib.parse.quote(query)
        url = f"{base_url}/rest/4.0/repos/{repo_id}/search?query={encoded_query}"
        logging.debug("Encoded query URL: %s", url)
        headers = {"Accept": "application/json"}
        data = requests.get(url, headers=headers, auth=HTTPBasicAuth(the_user, password)).json()
        return data
    except Exception as e:
        # Error handling added here so the snippet is syntactically complete;
        # the full example in the repository may handle errors differently.
        logging.error("get_model_id failed: %s", e)
        return {"error": str(e)}
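For example, if the LLM passes the text "customer", the query that gets URL-encoded and appended to the search endpoint looks like this (whitespace condensed for readability):

{ scope: { models:true }, attributes:["NAME"], filters: [ { className:"MT_BUSINESS_PROCESS_DIAGRAM_BPMN_20" }, { attrName:"NAME", op:"OP_LIKE", value:"customer" } ] }

The agent returns the resulting list of candidate models (names and IDs) to the LLM, which then picks the best match.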

Tool 'get_model_information'

This tool uses ADONIS' REST API to retrieve the details of a specific business process diagram (whose ID was previously retrieved using the tool get_model_id).

Click to view the code!

@mcp.tool()
def get_model_information(text: str):
    """This tool retrieves detailed information about a specific model using its ID.

    Args:
        text (str): The ID of the model to retrieve.

    Returns:
        Dict[str, Any]: A dictionary containing the model's metadata, structure, and contents.

    Notes:
        - If the user provides a model name instead of an ID, use the tool 'get_model_id' first to retrieve the correct ID.
        - If the user is asking a very general question or a question about a certain task or a process, they are referring to a specific model. In such cases, use the tool 'get_model_id' first to retrieve the correct ID - do not fall back to general knowledge.
        - When using this tool, in the response to the user, also give information about the exact number of contained tasks.
        - Always answer in the same structure:
        -- First 'Metadata', then underneath:
        --- 'Name': The model's name
        --- 'ID': The model's ID
        --- 'URL': The link to the model
        --- 'Number of tasks': The exact number of tasks within the model
        -- Then 'Summary': Your own summary of the model."""

    logging.info("get_model_information called with ID: %s", text)
    try:
        # Retrieve the model's attributes and relations via the REST API
        url = f"{base_url}/rest/4.0/repos/{repo_id}/models/{text}?attribute=NAME&relation= "
        logging.debug("Model information URL: %s", url)
        headers = {"Accept": "application/json"}
        data = requests.get(url, headers=headers, auth=HTTPBasicAuth(the_user, password)).json()
        return data
    except Exception as e:
        # Error handling added here so the snippet is syntactically complete;
        # the full example in the repository may handle errors differently.
        logging.error("get_model_information failed: %s", e)
        return {"error": str(e)}
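For example, with the model ID returned by get_model_id, the tool simply requests <base_url>/rest/4.0/repos/<repo_id>/models/<model_id>?attribute=NAME&relation= and passes the JSON response back to the LLM, which then extracts the name, link and tasks for the structured answer described in the docstring.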

ADOIT - Finding application components

The relevant code is contained in the file https://github.com/BOC-Group/developer-examples/blob/master/usage/mcp/server_ait.py. This file defines the functionality and description of the tools available for the LLM to call.

Tool 'find_applications'

This tool uses ADOIT's REST API to retrieve a list of application components matching the user's request.

Click to view the code!

@mcp.tool()
def find_applications(attrName: str, attrValue: str, owner: str):
    """Use this tool if a user asks for an application or application component, e.g. "Which applications with high availability are owned by arch".

    Args:
        attrName (str): An attribute to look for. This should have one of the following values, depending on what the user asks for:
            availability: "A_AVAILABILITY"
            integrity: "A_INTEGRITY"
            confidentiality: "A_CONFIDENTIALITY"
            If the user does not ask for any of those attributes, the value should be an empty string.
        attrValue (str): The value of the attribute to look for. This should have one of the following values, depending on the user's input (e.g. asking for standard availability would result in "Standard"):
            standard: "Standard"
            high: "High"
            very high: "Very high"
        owner (str): The name of the owner. If the user doesn't ask for an owner, this should be an empty string.

    Returns:
        Dict[str, Any]: A dictionary containing the found applications, each with its ID and name.

    Notes:
        - Use this tool when the user asks for an application or application component with certain properties.
        - Also always use this tool first when the user asks for an application with certain properties - do not fall back to general knowledge.
        - For each found application, print out the name and the ID in your response."""
    logging.info("find_applications called with attrName: %s", attrName)
    logging.info("find_applications called with attrValue: %s", attrValue)
    logging.info("find_applications called with owner: %s", owner)

    try:
        # Build the search query: application components, optionally filtered
        # by the requested attribute value and/or the application owner
        query = {}
        scope = {}
        scope["repoObjects"] = True
        query["scope"] = scope
        attributes = ["NAME", "A_DESCRIPTION"]
        query["attributes"] = attributes
        classFilter = {}
        classFilter["className"] = "C_APPLICATION_COMPONENT"
        filters = [classFilter]
        if len(attrName) > 0:
            attrFilter = {}
            attrFilter["attrName"] = attrName
            attrFilter["op"] = "OP_EQ"
            attrFilter["value"] = attrValue
            filters.append(attrFilter)
        if len(owner) > 0:
            ownerFilter = {}
            ownerFilter["relName"] = "RC_IS_APPLICATION_OWNER"
            ownerFilter["op"] = "OP_LIKE"
            ownerFilter["value"] = owner
            filters.append(ownerFilter)
        query["filters"] = filters
        encoded_query = urllib.parse.quote(json.dumps(query))
        url = f"{base_url}/rest/4.0/repos/{repo_id}/search?query={encoded_query}"
        logging.info("Encoded query URL: %s", url)
        headers = {"Accept": "application/json", "Prefer": "rest_links=true"}
        data = requests.get(url, headers=headers, auth=HTTPBasicAuth(user, password)).json()
        return data
    except Exception as e:
        # Error handling added here so the snippet is syntactically complete;
        # the full example in the repository may handle errors differently.
        logging.error("find_applications failed: %s", e)
        return {"error": str(e)}
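For the example request "Which applications with high availability are owned by arch?", the LLM would call the tool with attrName="A_AVAILABILITY", attrValue="High" and owner="arch", and the JSON query built by the code (shown here formatted for readability) would be:

{
  "scope": {"repoObjects": true},
  "attributes": ["NAME", "A_DESCRIPTION"],
  "filters": [
    {"className": "C_APPLICATION_COMPONENT"},
    {"attrName": "A_AVAILABILITY", "op": "OP_EQ", "value": "High"},
    {"relName": "RC_IS_APPLICATION_OWNER", "op": "OP_LIKE", "value": "arch"}
  ]
}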

Usage

To use the scenarios, the necessary parameters in the file .env have to be set, ADONIS or ADOIT has to be running, and their REST API has to be accessible to the agent.
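The concrete variable names expected in .env are defined in the example repository; as an illustration only, with hypothetical variable names, loading such settings in Python could look like this:

# Hypothetical sketch of reading settings from .env - the variable names
# used by the actual example are defined in the repository, not here.
import os
from dotenv import load_dotenv  # assumes python-dotenv is installed

load_dotenv()                                  # read key=value pairs from .env
base_url = os.getenv("ADO_BASE_URL")           # e.g. http://localhost:8000/ADONIS
repo_id = os.getenv("ADO_REPO_ID")             # ID of the repository to query
the_user = os.getenv("ADO_USER")               # user for Basic authentication
password = os.getenv("ADO_PASSWORD")           # cleartext only in this example setup
openai_api_key = os.getenv("OPENAI_API_KEY")   # key for the backing LLM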

To start the agent, execute

python agent.py

This will result in the agent starting up and waiting for user input.

ADONIS

Input can be provided by asking for a certain task, e.g. "Tell me about creating a new customer".

This should result in the agent contacting the LLM which will first search for the model, then retrieve the model details and provide a summary. Whenever a tool is called, this is shown by the agent on the console:

MCP Usage ADONIS

ADOIT

Input can be provided by asking for applications, e.g. "Which applications with high availability are owned by arch".

This should result in the agent contacting the LLM, which will search for matching applications and show their data to the user. Whenever a tool is called, this is shown by the agent on the console:

MCP Usage ADOIT

Logging

Logs are created in the subfolder logs. There are three different types of log files:

  • agent_<TIMESTAMP>.log: Contains information related to the agent, e.g. errors when trying to reach the LLM, http requests sent to the LLM or when calling functions.
  • server_<TIMESTAMP>.log: Contains information about internal processing when one of the provided functions is executed.
  • messages_<TIMESTAMP>.json: Contains the messages that are sent or received by the agent.
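Timestamped log files like these can be created with Python's standard logging module; a minimal sketch (the configuration in the example repository may differ):

# Minimal sketch of per-run log files; the example repository may configure
# handlers, levels and formats differently.
import logging
import os
import time

os.makedirs("logs", exist_ok=True)
timestamp = time.strftime("%Y-%m-%d_%H-%M-%S")
logging.basicConfig(
    filename=f"logs/agent_{timestamp}.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)
logging.info("Agent started")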