
The Model Context Protocol (MCP) supports the development of AI agents that can perform complex tasks by interacting with multiple data sources and tools in real time.
We’re in the age of AI assistants and AI agents. Increasingly, their role is shifting from tools that draw on a limited set of data to perform narrowly defined tasks to systems that are more autonomous and collaborative. The Model Context Protocol (MCP) is seen as a way to enable that transformation.
Why? AI assistants and AI agents must now be able to work with more types of data as they become available and take part in broader operational workflows within an organization. Where does MCP come in? As we’ve noted in an earlier RTInsights article: “MCP is an open standard for connecting AI assistants to external data sources and systems where data lives, including content repositories, business tools, and development environments.”
A Deeper Dive into MCP
MCP is designed to bridge the gap between large language models (LLMs) and the diverse data systems they interact with. Announced by Anthropic in November 2024, MCP provides a universal interface that enables AI systems to securely and efficiently access both local and remote data sources, such as databases, APIs, and file systems.
This standardization addresses the complexities and scalability issues associated with bespoke integrations, allowing developers to build AI applications that are more adaptable and context-aware.
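To make the “universal interface” idea concrete, the sketch below shows how a single data source might be exposed to an AI assistant as an MCP tool. It uses the official MCP Python SDK’s FastMCP helper; the server name, the query_orders tool, and the SQLite database are illustrative assumptions rather than part of any product mentioned in this article.

```python
# Minimal MCP server sketch (MCP Python SDK). The "orders-demo" name,
# the query_orders tool, and orders.db are illustrative assumptions.
import sqlite3

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("orders-demo")


@mcp.tool()
def query_orders(customer_id: str) -> str:
    """Return recent orders for a customer from a local SQLite store."""
    conn = sqlite3.connect("orders.db")  # assumed local data source
    try:
        rows = conn.execute(
            "SELECT id, total, created_at FROM orders WHERE customer_id = ?",
            (customer_id,),
        ).fetchall()
        return "\n".join(f"{r[0]}: {r[1]} on {r[2]}" for r in rows) or "no orders found"
    finally:
        conn.close()


if __name__ == "__main__":
    # Serve over stdio so an MCP-capable assistant can launch and call the tool.
    mcp.run()
```

Any MCP-capable client that launches this script can discover and call query_orders without a bespoke connector, which is exactly the per-integration overhead the protocol is meant to remove.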
For AI practitioners, data analysts, Chief Data Officers, and others, MCP supports the development of AI agents that can perform complex tasks by interacting with multiple data sources and tools in real time.
Early adopters have utilized MCP to connect internal systems, enabling AI assistants to access proprietary knowledge bases and developer tools. Additionally, some development platforms are incorporating MCP to enhance their AI functionalities, allowing for more nuanced and efficient code generation and analysis.
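On the consuming side, the assistant’s host application acts as an MCP client: it launches or connects to a server, discovers the tools it exposes, and invokes them on the model’s behalf. The sketch below, again using the MCP Python SDK, assumes the hypothetical orders server above has been saved as orders_server.py.

```python
# Minimal MCP client sketch (MCP Python SDK). The server command and the
# query_orders tool come from the illustrative server sketch above.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Launch the example server as a subprocess and talk to it over stdio.
    params = StdioServerParameters(command="python", args=["orders_server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover what the server exposes...
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

            # ...then call a tool, as an AI host would on the model's behalf.
            result = await session.call_tool(
                "query_orders", arguments={"customer_id": "C-1001"}
            )
            print(result.content)


asyncio.run(main())
```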
Some in the industry believe MCP servers will soon become as essential to AI as APIs were to cloud computing. According to Wikipedia, the adoption of MCP by major AI providers, including OpenAI and Google DeepMind, underscores its potential as a foundational protocol for future AI integrations, promoting interoperability and reducing the overhead associated with custom solutions.
Industry Support for MCP
Many in the industry are embracing MCP. Over the last few months, numerous vendors have announced support for MCP or incorporated it into their offerings. A sample of the new MCP offerings is listed below.
(Note: This list is not meant to be a definitive collection of all MCP announcements. It is simply meant to demonstrate the broad support for MCP across different types of vendors and use cases.)
Alation announced the launch of its Agentic Platform. This reinvention of the data catalog for the AI era introduces agents that automate and guide data discovery, governance, and compliance management. Alation also announced its AI Agent SDK, with support for Anthropic’s MCP, enabling partners and customers to build agents and applications that leverage the data intelligence capabilities of the Alation Platform.
Bedrock Security announced its MCP Server. The MCP Server provides a secure, standardized gateway between AI agents and enterprise data, auditing model interactions and allowing for the safe adoption of open agentic AI standards. It integrates deep contextual knowledge of data, risk, and usage from the Bedrock Platform’s metadata lake directly into enterprise workflows and emerging agentic AI systems.
Boomi announced innovations designed to accelerate and scale intelligent automation across the enterprise. These innovations included the general availability of Boomi Agentstudio, new AI agents, the addition of Boomi Data Integration (formerly Rivery) to the Boomi Enterprise Platform, and support for Model Context Protocol (MCP).
CData announced the launch of MCP servers. CData MCP Servers (in beta at the time of the announcement) enable AI tools like Claude to securely connect with dozens of enterprise data sources, including popular CRMs, ERPs, accounting solutions, collaboration platforms, and traditional on-premises and cloud data stores. The company plans to expand access to CData’s entire catalog of 350+ sources.
Cloudflare announced several new offerings to accelerate the development of AI agents. Cloudflare now enables developers to easily build and deploy powerful AI agents with the remote MCP server, generally available access to durable Workflows, and a free tier for Durable Objects. These offerings enable developers to build agents in minutes rather than months, simply, affordably, and at scale.
Dremio announced the launch of the Dremio MCP Server, a solution that brings AI-native data discovery and query capabilities to the lakehouse. By adopting the open MCP, Dremio enables AI agents to dynamically explore datasets, generate queries, and retrieve governed data in real time. Through MCP, Dremio natively integrates with leading AI models like Claude, enabling agents to seamlessly discover and query data with contextual understanding.
MindsDB announced support for the MCP across both its open source and enterprise platforms. This integration positions MindsDB as a unified AI data hub that standardizes and optimizes how AI models access enterprise data, dramatically simplifying artificial intelligence deployment in complex data environments. By implementing MCP, MindsDB now enables AI applications and agents to run federated queries over data stored in different databases and business applications as if they were a single database, eliminating a critical barrier to enterprise AI adoption.
Netwrix launched a free open-source MCP server integration for Netwrix Access Analyzer, enabling its customers to rapidly gain deep data security insights by connecting AI assistants to query data collected by the solution. With this integration, security teams can ask natural-language questions and get instant answers about who has access to what data, sensitive data exposure, stale accounts, and other risk indicators—without logging into dashboards or writing a single query.
Operant AI announced AI Gatekeeper, a new product that brings end-to-end runtime AI protection to enterprises deploying AI Applications and AI Agents, from Kubernetes to hybrid and private clouds. AI Gatekeeper not only brings Operant’s powerful 3D Defense capabilities beyond Kubernetes, but it also provides new defenses against rogue agents, including trust scores, agentic access controls, and threat blocking for MCPs and Agentic AI Non-Human Identities (NHIs).
Postman announced integrated support for MCP directly in its API platform. With this release, organizations can create and send MCP requests using the familiar Postman API client and generate MCP servers from a network of 100,000 APIs. Calling a tool, composing a prompt, exploring a server’s available resources, or building a server all works seamlessly inside Postman.
Reltio announced the launch of the Reltio MCP Server. The solution allows LLM-powered agents, such as those built on Claude, Bedrock, or Vertex, to connect directly with Reltio’s trusted data through the open MCP standard. This means agents can search profiles, review matches, and update entities within the Reltio Platform in real time, without custom development, brittle connectors, or complex workarounds.
StarTree announced two new AI-native innovations to its real-time data platform for enterprise workloads: MCP support and vector embedding model hosting. These capabilities enable StarTree to power agent-facing applications, real-time Retrieval-Augmented Generation (RAG), and conversational querying at the speed, freshness, and scale enterprise AI systems demand.