Model Context Protocol (MCP) is one of those things you need to learn how to use: it's a breakthrough in the AI space that will bring interesting developments to DeAI too. Anthropic introduced it in November 2024, to the glee of developers worldwide.
This blog is intended for developers, including our own community. It covers MCP's architecture, the server lifecycle, security considerations, attacks to watch out for, and how it's being adopted.
FLock’s research team is exploring trends in a series of educational blogs to help FLockies stay ahead. Our last two explainers were on watermarking and reinforcement learning (RL) in LLMs – stay tuned for more.
What is MCP?
First off, a protocol is a set of rules that govern how systems (including AI systems) communicate, interact, and exchange information.
MCP is a standardised interface that creates a two-way connection between AI agents and real-world tools like APIs, apps, and files. With it, AI systems no longer need to be manually integrated with every new tool or API they use.
In other words, MCP is a universal translator and connector for AI agents. Instead of hand-coding each integration, you just plug into MCP.
AI agents can discover, understand, and use a new service on the fly – just like how we browse and install apps from an App Store. MCP allows AI models to understand external tools’ functionality through self-describing metadata, and interact with them autonomously, securely, and flexibly.
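To make that concrete, here is a simplified sketch of the kind of self-describing metadata an MCP server returns when a client lists its tools, shown as a plain Python dict. The field names (name, description, inputSchema) follow the MCP tool definition; the weather tool itself is a made-up example.

```python
# A hypothetical entry in a server's tools/list response, shown as a Python
# dict. The model reads the description and JSON Schema to learn what the
# tool does and how to call it; no hand-written integration code is needed.
example_tool_listing = {
    "tools": [
        {
            "name": "get_weather",  # identifier the client uses to call the tool
            "description": "Get the current weather for a given city.",
            "inputSchema": {  # JSON Schema describing the expected arguments
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"}
                },
                "required": ["city"],
            },
        }
    ]
}
```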
Why we needed MCP
Before MCP, we had to wire up each API integration by hand, which was repetitive, error-prone, and not scalable. Standardised plugin interfaces existed, but they were often limited, platform-specific, and lacked memory and context. Retrieval-Augmented Generation (RAG) was great for fetching data, but useless for taking action.
Then came MCP: a smarter, more autonomous way to integrate tools into AI workflows.
The architecture of MCP
MCP is made up of three core components:
1. MCP host (AI Application)
This is the AI-powered app that initiates everything. Think tools like Claude Desktop or Cursor IDE. It hosts the client and manages communication with external servers.
2. MCP client (AI agent)
The MCP client is essentially the brain. It lives inside the host (e.g. Claude or ChatGPT) and handles all communication with MCP servers. It discovers, understands, and invokes tools, and also handles real-time updates and analytics on tool usage.
3. MCP server (external tool)
This is the third-party tool you want the AI to use. It exposes its APIs and data through a standardised, AI-readable format. It describes what it does, how it works, and how to interact with it. This includes tools (actual endpoints/actions), resources (data sources), and prompts (reusable templates for repetitive tasks).
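To see how these three pieces line up in practice, here is a minimal server sketch assuming the official MCP Python SDK (the `mcp` package) and its FastMCP helper. It exposes one tool, one resource, and one prompt; the names and logic are placeholders.

```python
# server.py - a minimal MCP server sketch (assumes the official `mcp` Python SDK)
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers."""  # tool: an action the AI can invoke
    return a + b

@mcp.resource("greeting://{name}")
def greeting(name: str) -> str:
    """A dynamic greeting."""  # resource: a readable data source
    return f"Hello, {name}!"

@mcp.prompt()
def review_code(code: str) -> str:
    """Reusable code-review template."""  # prompt: a reusable template
    return f"Please review this code:\n\n{code}"

if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport, so a host can spawn it locally
```

A host such as Claude Desktop or Cursor can then be pointed at this script, and its client discovers the tool, resource, and prompt automatically.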
The growing MCP landscape
MCP adoption is growing fast, into an ecosystem of its own:
- Tool ecosystem: APIs across search, productivity, automation, devtools, and more are becoming MCP-compatible
- Framework support: LangChain, LlamaIndex, AutoGPT and others are leaning into MCP-style interfaces
- Open ecosystem: Anyone can host an MCP server
- Cross-model interoperability: Tools built for OpenAI can also work with Claude, Mistral, etc.
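That interoperability is easiest to see from the client side: the same server can be driven by any MCP-speaking host or agent, regardless of the underlying model. The rough sketch below, again assuming the official MCP Python SDK, spawns the server from the previous example over stdio, lists its tools, and calls one.

```python
# client.py - connects to the server sketched above (assumes the `mcp` SDK)
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

params = StdioServerParameters(command="python", args=["server.py"])

async def main():
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()          # MCP handshake
            tools = await session.list_tools()  # discover what the server offers
            print("Available:", [t.name for t in tools.tools])
            result = await session.call_tool("add", {"a": 2, "b": 3})
            print("Result:", result.content)

asyncio.run(main())
```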
MCP server lifecycle
An MCP Server goes through three main phases, each with its own set of risks.
1. Creation
Registration comes first: the server gets a unique name so clients can find it. Then the installer is deployed, and the configuration files and code are set up. Finally, a code integrity check helps prevent tampering or backdoors.
This phase carries its own risks: name collisions (imposter tools), installer spoofing (malware), and embedded backdoors.
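One way to blunt installer spoofing is to verify the package digest before anything is installed. The sketch below is purely illustrative: the package filename and the source of the expected digest (e.g. the server's registry entry) are assumptions, not part of the MCP spec.

```python
import hashlib

def verify_installer(path: str, expected_sha256: str) -> bool:
    """Compare a downloaded package's SHA-256 digest against a published value."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256

# Hypothetical usage: the expected digest should come from a trusted registry
# entry or signed release notes, never from the same place as the download.
# if not verify_installer("weather-mcp-server.tar.gz", PUBLISHED_DIGEST):
#     raise RuntimeError("Digest mismatch: possible spoofed installer")
```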
2. Operation
During operation, the server executes tools as requested, handles slash commands and dynamic inputs, and runs inside a secure sandbox.
Risks include tool-name confusion, overlapping commands, and sandbox escapes via third-party libraries.
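A simple mitigation for tool-name confusion is to dispatch only exact, pre-approved names. The allowlist and helper below are hypothetical and host-specific, not part of any SDK.

```python
# Hypothetical host-side guard: a lookalike tool such as "get-weather" or
# "get_weather " (trailing space) registered by another server is rejected
# instead of being silently matched.
APPROVED_TOOLS = {"get_weather", "add"}

def guarded_dispatch(tool_name: str, arguments: dict, call_tool):
    if tool_name not in APPROVED_TOOLS:
        raise PermissionError(f"Tool '{tool_name}' is not on the allowlist")
    return call_tool(tool_name, arguments)
```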
3. Update
Updates keep permissions in sync, maintain version consistency, and remove outdated versions.
Risks: privilege persistence, re-deployment of unsafe versions, and configuration drift over time.
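One way to limit privilege persistence is to reconcile granted permissions against whatever the updated server actually declares. The helper below is a hypothetical sketch of that reconciliation, with made-up permission names.

```python
def sync_permissions(granted: set[str], declared_by_new_version: set[str]) -> set[str]:
    """Drop any permission the updated server no longer declares (hypothetical)."""
    stale = granted - declared_by_new_version
    if stale:
        print(f"Revoking stale permissions: {sorted(stale)}")
    return granted & declared_by_new_version

# Example: the old version had write access that the new one no longer declares.
print(sync_permissions({"fs:read", "fs:write"}, {"fs:read"}))  # keeps only fs:read
```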
Security considerations
MCP is powerful, but its flexibility brings risk:
- No central security oversight, risking fragmented practices
- Lost workflow context, since multi-step tasks can lose coherence
- Data leakage and abuse in multi-tenant systems
- Authentication/authorisation gaps, with no unified way to manage user identities
- Debugging and monitoring gaps, which make it hard to trace attacks or tool misuse
Attacks to watch out for
Some concerns to be wary of are: Malicious Code Execution (MCE), where harmful scripts are injected; Remote Access Control (RAC), where an attacker gains unauthorised access to the system; and Credential Theft (CT), where environment secrets are leaked.
These can be triggered through traditional prompt injection, or through Retrieval-Augmented Deception (RADE) attacks, which pollute trusted data sources.
For example:
- Prompt Injection + RAC = system control
- RADE + CT = stolen secrets via corrupted RAG sources
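On the defensive side, one crude but cheap pre-filter is to flag instruction-like phrases in retrieved chunks before they ever reach the model. The patterns and helper below are illustrative only; a real RADE defence needs provenance checks and review of the underlying data sources, not just regexes.

```python
import re

# Hypothetical pre-filter for RAG pipelines: flag chunks containing
# instruction-like phrases commonly used in injection payloads, so a poisoned
# knowledge-base entry is surfaced for review rather than fed to the model.
SUSPICIOUS = re.compile(
    r"(ignore (all )?previous instructions|run the following command|"
    r"send .* credentials|export .*api[_ ]?key)",
    re.IGNORECASE,
)

def filter_retrieved_chunks(chunks: list[str]) -> list[str]:
    clean = []
    for chunk in chunks:
        if SUSPICIOUS.search(chunk):
            print("Flagged suspicious chunk:", chunk[:60])
            continue
        clean.append(chunk)
    return clean
```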
About FLock
FLock.io is a community-driven platform facilitating the creation of private, on-chain AI models. By combining federated learning with blockchain technology, FLock offers a secure and collaborative environment for model training, ensuring data privacy and transparency. FLock’s ecosystem supports a diverse range of participants, including data providers, task creators, and AI developers, incentivising engagement through its native FLOCK token.
MCP matters to FLock because we're building a decentralised AI training platform where agents need to dynamically use different data sources, test and apply tools across environments, and orchestrate workflows securely across nodes.