There’s no ignoring the AI gold rush.
In 2024, 78% of organizations reported using AI, a 23 percentage-point increase from the year prior. The number of those experimenting with AI in 2025 is surely even higher.
But that doesn’t mean all AI applications are necessarily successful. AI ROI is notoriously hard to pin down, with 90% of 1,150 technology leaders in a new Apptio survey admitting they’re “struggling to measure ROI on their investments.”
For many, the problem is moving from prototyping to at-scale production.
New AI coding assistants aimed at non-coders (such as Lovable, Replit, and Bolt) significantly exacerbate the problem. Lowering the barrier to entry means virtually anyone can build an AI application, but dreaming up ideas isn’t the same as preparing them for real-world, enterprise-grade deployment. It’s no wonder nearly two-thirds of survey respondents told IBM their organizations have yet to begin scaling AI.
The bumpy road from prototyping to production
Playing around with AI prototypes is exciting, but all that experimentation is ultimately meaningless if those applications can’t meet the high availability, data sovereignty, and security and compliance requirements necessary to support real-world enterprise operations, particularly in highly regulated industries like finance or healthcare.
And when moving from prototype to production, plenty of pitfalls await developers along the way.
Database limitations
Traditional databases for handling transactions and queries simply weren’t designed to support AI applications, lacking crucial functionalities such as vector similarity search, hybrid ranking, and semantic retrieval.
As a workaround, some organizations turn to specialized vector databases optimized for high-dimensional data. But while these systems may get the job done in the prototyping stage, things start to come apart when it’s time for at-scale production, where enterprise-grade security and compliance are paramount.
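At its core, the vector similarity search these specialized systems provide (and that extensions bring to Postgres) is a nearest-neighbor ranking over embeddings. The sketch below illustrates the computation in plain Python with made-up two-dimensional vectors; real systems use high-dimensional embeddings and approximate indexes rather than a brute-force scan:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query, documents, k=2):
    """Rank stored (id, vector) pairs by similarity to the query vector."""
    scored = [(doc_id, cosine_similarity(query, vec)) for doc_id, vec in documents]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:k]

# Toy corpus: ids with (hypothetical) embedding vectors.
docs = [("a", [1.0, 0.0]), ("b", [0.0, 1.0]), ("c", [0.7, 0.7])]
print(top_k([1.0, 0.1], docs, k=2))  # "a" and "c" rank highest
```

A database without this capability forces exactly the kind of bolt-on workaround described above, which is why vector support inside the primary database matters for production.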
New Postgres-based cloud services are another way to tackle prototyping, but face similar limitations when moving to production and integrating with existing databases. Many regulated enterprises simply can’t allow their data and applications to reside in a proprietary cloud service. And to be clear, most Postgres cloud services are proprietary even if the core Postgres database they run is open source.
Perhaps most importantly, if AI applications are ever to graduate from promising prototypes to truly useful, at-scale solutions, they must integrate with organizations’ existing databases. While it is possible to migrate legacy databases to the cloud, the process is time-consuming and costly, and there’s no guarantee that new cloud environments can accommodate strict security and compliance requirements.
Integration complexity
Creating modern agentic AI, RAG, and other AI applications has often required piecing together a complicated and brittle array of tools, APIs, and data pipelines. This further complicates the journey from prototyping to production.
For example, creating a chatbot on top of existing knowledge bases requires cobbling together a diverse set of APIs, data sources, chunking data pipelines, etc. Let’s assume you opt for Postgres as the data infrastructure, a common move considering the system’s reliability, flexibility, and popularity with developers. Doing so means integrating custom tooling, integrations, and workflows to move the chatbot from its prototyping environment to production-ready Postgres infrastructure — a complex undertaking that slows scaling and risks introducing compliance gaps.
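The chunking stage of such a pipeline can be sketched in a few lines. The sizes below are arbitrary assumptions; production pipelines typically split on tokens or semantic boundaries rather than raw characters, but the overlap idea is the same:

```python
def chunk_text(text, chunk_size=200, overlap=50):
    """Split text into overlapping fixed-size chunks (character-based).

    Overlap preserves context that would otherwise be lost at chunk
    boundaries when chunks are embedded and retrieved independently.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break
    return chunks
```

Even this toy version hints at the problem: every such component is one more piece of custom glue that has to survive the move from prototype to production.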
Security and compliance complications
At the end of the day, for organizations operating in highly regulated industries (e.g., finance, healthcare, or government), all roads lead back to security and compliance requirements, non-negotiables that typical prototyping environments can’t fulfill.
For example, to be truly production-ready, AI applications must run in environments that support audit trails to show who accesses what data; data encryption to protect sensitive information; and role-based access controls to restrict data access to authorized persons, along with industry-relevant compliance certifications, e.g., HIPAA, SOC 2, GDPR, etc.
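In practice these controls are enforced at the database and platform layer (e.g., Postgres roles and row-level security), but the pattern they implement is simple. The sketch below is illustrative only; the roles, permissions, and users are hypothetical:

```python
# Hypothetical role-to-permission mapping for a regulated workload.
ROLE_PERMISSIONS = {
    "clinician": {"read:patient_records"},
    "analyst": {"read:aggregates"},
    "admin": {"read:patient_records", "read:aggregates", "write:patient_records"},
}

AUDIT_LOG = []

def access(user, role, permission):
    """Grant or deny an action, recording every attempt in an audit trail."""
    allowed = permission in ROLE_PERMISSIONS.get(role, set())
    AUDIT_LOG.append({"user": user, "permission": permission, "allowed": allowed})
    return allowed

access("dr_lee", "clinician", "read:patient_records")   # allowed
access("dr_lee", "clinician", "write:patient_records")  # denied, still logged
```

Note that denied attempts are logged too: an audit trail that only records successes satisfies neither auditors nor regulators.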
Data sovereignty is another concern for regulated enterprises.
Take organizations handling European consumer data. To meet GDPR requirements, this information can’t reside in US data centers, meaning organizations need infrastructure that supports regional data residency with distributed, multi-region databases.
Where MCP fits in
Considering the swell of AI applications and the persistent challenges moving from prototyping to production, it’s a wonder more vendors aren’t stepping up to help ease the transition.
Until recently, there has been no dedicated Postgres vendor positioning itself as fully focused on AI integration. More specifically, there’s been no vendor offering a fully featured, fully supported MCP (Model Context Protocol) server that works with existing Postgres databases.
Yes, there are several Postgres MCP servers available, but most of them are tied to vendors’ own cloud database offerings, thus pushing vendor lock-in and stifling flexibility.
This is no small problem given the increasingly important role MCP servers play in developing and operationalizing AI applications.
Anthropic’s open-source MCP has quickly become the standard for connecting AI agents and applications to external data sources and tools, marking a significant step forward in addressing thorny integration challenges.
Sans MCP, developers have to configure custom connectors to get AI agents to integrate with different databases, APIs, workflow engines, and the like — a manageable maneuver in the prototyping stage that simply isn’t feasible when it’s time to move to production and scale.
With MCP servers, developers can standardize interactions across systems and eliminate custom integrations. Still, this requires advanced planning; if organizations want to productionize and scale AI applications, they must prioritize infrastructure that supports MCP.
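Concretely, MCP-aware clients are typically pointed at a server through a small JSON configuration block. The snippet below generates one such block; the server name, package, and connection string are hypothetical placeholders, not a real product configuration:

```python
import json

# Hypothetical example: registering a Postgres MCP server with an MCP
# client. "@example/postgres-mcp-server" and the connection string are
# placeholders for illustration only.
config = {
    "mcpServers": {
        "postgres": {
            "command": "npx",
            "args": [
                "-y",
                "@example/postgres-mcp-server",
                "postgresql://app_user@db.internal:5432/appdb",
            ],
        }
    }
}

print(json.dumps(config, indent=2))
```

Once the client knows about the server, every agent it runs can reach the database through the same standardized interface instead of a bespoke connector.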
Beyond integration, database architecture determines what’s possible
Implementing MCP servers may help solve the integration problems stalling at-scale AI production, but what about the data foundation underneath?
Supporting enterprise-grade AI applications at scale requires a production-ready database architecture that can provide high availability, global distribution, security, and compliance.
For many organizations in highly regulated, mission-critical industries, Postgres is already the common choice, but it’s only one part of the equation. To move AI applications from prototype to production, organizations also need infrastructure that both meets enterprise requirements and integrates with existing databases.
The pgEdge Agentic AI Toolkit for Postgres is an enterprise-grade Postgres-based infrastructure that enables this production-ready architecture, helping developers productionize AI applications with enterprise-grade availability, security, data sovereignty, and global deployment.
Fully open-source and compatible with any standard version of Postgres, the toolkit can be deployed on-premises, in self-managed cloud accounts, or in the upcoming pgEdge managed cloud service, providing deployment flexibility without locking organizations into pgEdge’s Postgres offerings.
To get up and running, developers download the toolkit and configure Claude Code (or their agent of choice) to use the pgEdge MCP server. This highly performant, fully featured MCP server gives AI applications secure access to both new and existing Postgres databases, so organizations can easily move AI workloads from prototype to production at scale.
Abandoning the gold rush for real ROI
The AI gold rush has made us flush with AI prototypes but wanting for production-ready applications that can scale.
With its new Agentic AI Toolkit, pgEdge combines distributed, enterprise-grade Postgres infrastructure with a suite of AI tools, giving teams a way to move AI applications from experimental prototypes to enterprise-grade, at-scale production and finally abandon the mythical gold rush for real ROI.
