In November 2022, OpenAI released ChatGPT, a groundbreaking AI language model that quickly captured widespread attention for its ability to generate human-like text. This release marked a pivotal moment in artificial intelligence, demonstrating the potential of large language models to assist with a wide range of tasks.
By March 2023, OpenAI expanded ChatGPT’s ecosystem with an API and plugin system. These additions enabled developers to integrate ChatGPT’s capabilities into their own applications, allowing seamless interaction with live data and third-party services. Competitors like Google quickly followed with their own APIs and integration methods for large language models.
In July 2024, I demonstrated this integration pattern at Google I/O Extended Fortaleza, where I built a chatbot using Vertex AI, Firebase, and the Gemini 1.5 model. The implementation leveraged the Function Calling feature, similar to ChatGPT’s API, to enable interaction with external data sources.
The demo below shows it in action: a chat that knows when to ask for user actions, such as post suggestions (the texts are in PT-BR).
The Integration Challenge
While Function Calling/Tools proved powerful, they revealed a critical limitation as applications scaled: every API to be integrated required its own custom wrapper. With each service exposing a unique API, developers faced the daunting task of creating and maintaining bespoke code for each integration.
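To make the problem concrete, here is a minimal, hypothetical sketch of the glue code one such integration required with Gemini's Function Calling on Vertex AI. The get_task_status function, the project values, and the returned data are placeholders, not the demo's actual code:

```python
import vertexai
from vertexai.generative_models import (
    FunctionDeclaration,
    GenerativeModel,
    Part,
    Tool,
)

vertexai.init(project="my-project", location="us-central1")  # placeholder values

# A hand-written schema for one external service; every new API needs its own.
get_task_status = FunctionDeclaration(
    name="get_task_status",  # hypothetical tool
    description="Fetch the status of a project task by its ID",
    parameters={
        "type": "object",
        "properties": {"task_id": {"type": "string"}},
        "required": ["task_id"],
    },
)

model = GenerativeModel(
    "gemini-1.5-pro",
    tools=[Tool(function_declarations=[get_task_status])],
)
chat = model.start_chat()
response = chat.send_message("What is the status of task 42?")

# If the model decided to call the function, run the bespoke wrapper and feed
# the result back -- this plumbing had to be rewritten for every integration.
call = response.candidates[0].content.parts[0].function_call
if call.name == "get_task_status":
    api_result = {"status": "in_progress"}  # stand-in for the real API call
    response = chat.send_message(
        Part.from_function_response(name="get_task_status", response=api_result)
    )

print(response.text)
```

Multiply this by every service a product touches, and the maintenance burden becomes clear.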
This challenge called for a standardized approach to connect AI applications with external services and data sources. Almost exactly one year ago, in November 2024, Anthropic introduced the Model Context Protocol (MCP), an open standard designed to address these challenges.

MCP Core Concepts
MCP architecture centers on three components: host, client, and server. Here’s how they work together:
MCP Host
The host is the AI application itself (what we typically call an AI Agent). It integrates one or more MCP clients to interact with various MCP servers, orchestrating the overall workflow with the LLM.

MCP Server
An MCP server exposes capabilities through a standardized API. Consider Asana's project management tool: its MCP server (currently in beta) lets AI agents interact with Asana resources, such as projects and tasks. This enables complete workflow automation, including creating projects, adding tasks, assigning team members, and updating statuses.
The server API consists of three elements:
- Prompts: Templates for generating messages to send to models
- Resources: Contextual information for LLMs, such as files and database schemas
- Tools: Executable functionalities exposed to clients (similar to function calls in ChatGPT’s API)
Asana’s MCP server provides tools like:
- asana_create_project: Create new projects
- asana_create_task: Create new tasks
Each tool includes a name, description, and parameters, enabling the LLM to determine when and how to invoke it.
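As an illustration (not Asana's actual implementation), here is a minimal server sketch using the official MCP Python SDK's FastMCP helper, where the function name, docstring, and type hints become the tool's name, description, and parameters. The server name and the create_project tool are hypothetical:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("project-manager")  # hypothetical server name

@mcp.tool()
def create_project(name: str, team: str) -> str:
    """Create a new project and return a confirmation message."""
    # A real server would call the project management API here.
    return f"Created project '{name}' for team '{team}'"

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default
```

When a client lists this server's tools, it receives exactly that metadata, which is what allows the LLM to decide when to call create_project.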
MCP Client
The MCP client is the component the agent uses to talk to MCP servers, orchestrating the exchange between servers and the LLM. Claude, for example, contains an MCP client within it, making it both an AI Agent and an MCP host. Integrating with Asana's MCP server requires just one command:
claude mcp add --transport sse asana https://mcp.asana.com/sse

After configuration, typing /mcp and selecting asana initiates the authentication flow. Once credentials are provided, Claude can interact directly with your Asana workspace.

Projects and tasks can be created simply by asking Claude to do so:

Flow of Interactions
MCP-based architectures typically follow these steps (see the client sketch after this list):
- The MCP host (AI application) boots up and initializes its MCP clients
- Each MCP client connects to its respective MCP server and fetches available tools
- When users interact with the host, it prompts the LLM with relevant context
- The LLM processes input and determines whether to invoke any tools
- If tool invocation is needed, the host sends requests through the MCP client to the MCP server with appropriate parameters
- The MCP server processes requests, performs actions, and returns results via the MCP client
- The host presents results to users
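A minimal client-side sketch of steps 2, 5, and 6, using the MCP Python SDK's stdio transport and assuming the hypothetical server from the previous sketch is saved as server.py:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Step 2: connect to the MCP server and fetch its available tools.
    params = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("Available tools:", [tool.name for tool in tools.tools])

            # Steps 5-6: the host forwards the LLM's chosen tool call through
            # the client; the server executes it and returns the result.
            result = await session.call_tool(
                "create_project", {"name": "Q4 Launch", "team": "Growth"}
            )
            print(result.content)

asyncio.run(main())
```

In a full agent, the tool name and arguments would come from the LLM's response rather than being hard-coded.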
The Woes and Joys of MCP
MCP’s core strength lies in isolating AI agents from tool definitions, providing flexibility, and enabling interoperability. However, MCP is simply a protocol; it doesn’t prescribe how to build robust agents. All concerns about building quality AI applications remain your responsibility, often requiring additional abstractions.
While major AI players have adopted MCP, connecting to servers introduces security considerations, particularly from the client's perspective.
For MCP clients: Only connect to trusted MCP servers. Currently, no official registry or certification authority validates MCP servers; just as you shouldn't install random packages, you shouldn't connect to untrusted servers.
For MCP server implementations: Expect that clients may send malformed requests or attempt exploits. Proper input validation and sanitization are essential for maintaining security and integrity.
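For instance, a minimal sketch of defensive input validation in a FastMCP tool handler (the create_task tool and its limits are hypothetical):

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("project-manager")

@mcp.tool()
def create_task(project_id: str, title: str) -> str:
    """Create a task in a project."""
    # Treat every client-supplied argument as untrusted input.
    if not project_id.isdigit():
        raise ValueError("project_id must be a numeric ID")
    if not 0 < len(title) <= 200:
        raise ValueError("title must be between 1 and 200 characters")
    return f"Created task '{title}' in project {project_id}"
```

Rejecting malformed arguments up front keeps bad requests from ever reaching the backing API.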
Conclusion
MCP has become the "lingua franca" for AI agents by providing a standardized protocol that eliminates the need for custom API wrappers, enabling seamless integration between AI applications and external services.
While it offers flexibility and interoperability, its adoption requires careful security considerations, as connecting to untrusted servers poses risks similar to installing unverified software packages. The protocol effectively decouples AI agents from tool definitions, but developers remain responsible for building robust, secure implementations.
We want to work with you. Check out our Services page!

