Building EnterpriseGPT
Applying lessons from cloud computing to AI
AI is starting to look a lot like early cloud computing: full of promise, poorly managed, and scaling faster than most companies can handle.
AI-generated content is appearing in HR policies, sales decks, executive reports, and customer communications—often without anyone knowing where it came from or who approved it.
It’s a replay of the cloud era, when business units adopted tools outside IT’s control, giving rise to Shadow IT.
Today, we’re entering the age of Shadow AI: employees using generative tools without standards, oversight, or accountability.
The consequences are already visible—duplicated work, brand inconsistencies, and exposure to legal and compliance risks.
Enterprises eventually learned to manage cloud sprawl with budgets, policies, and platform strategies.
AI will require the same treatment.
If organizations want to move beyond pilot projects and into productivity, they’ll need to bring Shadow AI into the light—and fast.
That’s why I am advocating for EnterpriseGPT. Read on to get my take.

🔮 AI Lesson - Write Better Prompts With ChatGPT
🎯 The AI Marketing Advantage - We tested OpenAI's New Models—Here's What Stood Out
🎙️ AI Confidential Podcast - Unleashing the Power of Agents with the “Forrest Gump of Tech”
I have been conducting a lot of training lately, with over 1,000 live attendees. This edition of the AIE was inspired by a talk I gave at a local meetup on Building an Open Source EnterpriseGPT (here are the slides).
I’d also like to invite you to a couple of learning opportunities in the next few weeks: Melanie McGlaughlin’s session “Building Better with Claude in the Console” on April 22nd, followed by my own Prompt Engineering Workshop on April 29th.


Building EnterpriseGPT
Applying lessons from cloud computing to AI
Artificial Intelligence (AI) is no longer a future prospect but a present-day force fundamentally reshaping enterprise operations. As organizations navigate this transformation, developing effective, secure, and tailored AI solutions requires a deep understanding of its core components—from data strategy and management to model selection and deployment—and strategic decisions about infrastructure control.
The current trajectory of AI adoption mirrors the evolution of cloud computing. Initially, enterprises approached the cloud with caution, wary of vendor lock-in, security vulnerabilities, and compliance challenges. This rational skepticism stemmed from the inherent risks of ceding control over critical assets.
Driven by competitive pressures and innovation, cloud migration accelerated, only to reveal new complexities—particularly around cost management. Unpredictable expenses led many organizations to re-evaluate, resulting in cloud repatriation efforts aimed at regaining control over spending, performance, and security.
AI is now traversing a similar path. Enterprises are eagerly integrating AI capabilities, often leveraging third-party platforms and public APIs. While this accelerates adoption, it simultaneously surfaces significant concerns regarding data sovereignty, security, and long-term financial viability.
Learning from the cloud experience, forward-thinking organizations recognize the strategic value of building internal, controlled AI infrastructure—an "Enterprise GPT." Such an approach aligns AI capabilities with specific business needs, integrates seamlessly with proprietary data, and prioritizes governance and compliance.
The Foundation: Data as the Engine of AI
Building such a controlled infrastructure starts with mastering the fundamental fuel of AI: data. Modern AI, particularly Large Language Models (LLMs), relies on massive datasets to learn intricate patterns. The challenge lies in harnessing the petabytes of data generated daily—much of it unstructured and underutilized. Generative AI provides powerful tools to extract insights, elevating data's strategic value. This underscores the critical importance of diverse, high-quality data sources for training robust and differentiated AI models.
Preparing the Fuel: Data Processing and Labeling
As my friend Aaron Fulkerson, CEO of Opaque and my co-host on the AI Confidential Podcast, notes, “Data in the age of AI isn’t an advantage—your data will be your only advantage.” Protecting data sovereignty and keeping your data private for your own use is a core requirement in AI implementations. But before you can do that, you need to make your data usable.
Raw data requires meticulous preparation before it can power AI models. This involves cleaning, structuring, and contextualizing it, often through data labeling, which provides the ground truth needed for effective learning. Effective data utilization hinges on robust processing and labeling workflows, supported by various tools:
Tools for Processing and Labeling Data:
| Tool Name | Type | Key Functions | Notable Features / Use Cases |
| --- | --- | --- | --- |
| Label Studio | Open Source | Data labeling for text, audio, images, video | Extensible with Python SDK; integrates with ML pipelines |
| Snorkel | Open Source | Programmatic data labeling, weak supervision | Label data without manual effort using weak supervision |
| | Proprietary | Data labeling, QA, embedding exploration | AI-augmented human-in-the-loop annotation environment |
| | Proprietary | Scalable data labeling with active learning | Combines machine assistance and human review |
| | Proprietary | Data prep, visual pipelines, auto-cleaning | Supports full ML lifecycle, collaboration |
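To make the weak-supervision idea concrete, here is a minimal sketch using Snorkel’s labeling-function API (one of the open source options in the table). The dataset, heuristic rules, and label scheme are all hypothetical; a real pipeline would use many more labeling functions over far more data.

```python
import pandas as pd
from snorkel.labeling import PandasLFApplier, labeling_function
from snorkel.labeling.model import LabelModel

SPAM, HAM, ABSTAIN = 1, 0, -1  # hypothetical label scheme

@labeling_function()
def lf_contains_offer(x):
    # Heuristic rule: promotional language suggests spam.
    return SPAM if "limited time offer" in x.text.lower() else ABSTAIN

@labeling_function()
def lf_internal_sender(x):
    # Heuristic rule: mail from our own domain is probably legitimate.
    return HAM if x.sender.endswith("@example.com") else ABSTAIN

# Toy unlabeled data; in practice this would be your raw enterprise corpus.
df = pd.DataFrame({
    "text": ["Limited time offer!!!", "Q3 roadmap attached for review"],
    "sender": ["promo@deals.example.net", "pm@example.com"],
})

# Apply every rule to every row, then let Snorkel's LabelModel reconcile the
# noisy, possibly conflicting votes into a single training label per row.
L_train = PandasLFApplier(lfs=[lf_contains_offer, lf_internal_sender]).apply(df)
label_model = LabelModel(cardinality=2, verbose=False)
label_model.fit(L_train, n_epochs=100)
print(label_model.predict(L_train))  # e.g., [1 0]
```

The payoff is that subject-matter experts encode their knowledge once, as rules, instead of labeling rows one at a time.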
Storing the Data: The Role of Vector Databases
Traditional databases struggle with the high-dimensional data representations central to AI. Storing and querying the resulting vector embeddings requires specialized databases. Vector databases are designed to index and retrieve these vectors, which capture semantic meaning, enabling essential AI tasks like similarity search, recommendation, and retrieval-augmented generation (RAG) across unstructured data.
Vector Database Options:
| Tool Name | Type | Key Functions | Notable Features / Use Cases |
| --- | --- | --- | --- |
| Milvus | Open Source | Vector similarity search, hybrid queries | Highly scalable, supports billion-scale vector indexing |
| Chroma | Open Source | In-memory vector DB, embeddings management | Lightweight, integrates easily with LangChain and LLM pipelines |
| Pinecone | Proprietary | Fully managed vector database | Real-time indexing and filtering with high availability |
| | Proprietary | Vector search integration with document store | Combines full-text, metadata, and vector search in one platform |
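To see what semantic retrieval actually looks like, here is a minimal sketch using Chroma, the lightweight open source option above. The documents and query are illustrative; Chroma embeds the text with its default embedding model unless you plug in your own.

```python
import chromadb

client = chromadb.Client()  # in-memory; use chromadb.PersistentClient(path=...) to persist
policies = client.create_collection(name="policies")

# Chroma embeds, stores, and indexes the documents for similarity search.
policies.add(
    ids=["doc1", "doc2"],
    documents=[
        "Employees may expense up to $50 per day for meals while traveling.",
        "All customer data must be stored in the EU region.",
    ],
)

# Semantic retrieval: the query shares almost no keywords with doc1, but the
# embeddings capture that both are about meal spending on trips.
results = policies.query(
    query_texts=["How much can I spend on food during a business trip?"],
    n_results=1,
)
print(results["documents"][0])
```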
Learning Paradigms: ML, Deep Learning, and Model Training
Understanding the different approaches to machine learning is key to selecting and applying the right techniques:
Machine Learning (ML): The foundational field where algorithms learn from data to make predictions or decisions.
Deep Learning: A subset of ML using multi-layered neural networks to model complex patterns effectively.
Learning Approaches:
Supervised Learning: Training models on labeled datasets (input-output pairs); a short example follows this list.
Unsupervised Learning: Discovering hidden patterns in unlabeled data.
Reinforcement Learning: Training models through trial-and-error based on environmental feedback (rewards/penalties).
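A few lines of scikit-learn make the supervised case concrete: the model is trained on labeled input-output pairs and then predicts labels for inputs it has never seen. The churn features and labels below are made up for illustration.

```python
from sklearn.linear_model import LogisticRegression

# Toy labeled dataset: [hours_used_per_week, open_support_tickets] -> churned (1) or retained (0)
X = [[1, 5], [2, 4], [10, 0], [12, 1], [0, 7], [9, 1]]
y = [1, 1, 0, 0, 1, 0]

model = LogisticRegression()
model.fit(X, y)  # supervised: the model sees both the inputs and their labels

# Predict for customers the model has never seen.
print(model.predict([[11, 0], [1, 6]]))  # -> [0 1]
```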
Generative AI: Creating New Possibilities
Generative AI models represent a significant leap, capable of creating novel content. You’ll need to understand the core vocabulary to have meaningful conversations. Here are key operational concepts to understand (I highly recommend you ask your favorite model to help you get a more thorough understanding, or review the presentation I referenced):
Training: Optimizing model parameters using data, often involving broad pretraining followed by specific fine-tuning.
Distillation: Creating smaller, efficient models that mimic larger ones.
Inference: Deploying trained models to generate outputs from new inputs.
Tokens & Context Window: Understanding how models process information (tokens) and their capacity (context window) is crucial for application design; the example after this list shows token counting in practice.
Feedback Loops: Implementing mechanisms for continuous model improvement based on performance and user interaction.
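Tokens are easy to see for yourself. The sketch below uses tiktoken, OpenAI’s open source tokenizer, to show that models count tokens rather than words, which is exactly what a context window limits.

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by many recent OpenAI models
text = "Enterprise AI needs governance, not just enthusiasm."

tokens = enc.encode(text)
print(len(text.split()), "words ->", len(tokens), "tokens")
print(tokens[:8])          # the first few integer token IDs
print(enc.decode(tokens))  # decoding round-trips back to the original text
```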
The Strategy: Open Source AI and Infrastructure Control
While proprietary models offer ease of access, the open source AI ecosystem provides compelling advantages for enterprises prioritizing control, customization, and long-term strategy. Concerns regarding data privacy, model transparency, and cost predictability drive significant interest in open source alternatives and frameworks (PyTorch, TensorFlow, Hugging Face).
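To show how low the barrier to entry is, here is a minimal sketch using Hugging Face’s pipeline API. The tiny demo model is chosen only so this runs on a laptop; swap in whatever open model your hardware and license requirements allow.

```python
from transformers import pipeline

# distilgpt2 is a small demo model; production use would call for one of the
# model families listed in the table below.
generator = pipeline("text-generation", model="distilgpt2")

result = generator(
    "The biggest lesson enterprises learned from cloud computing is",
    max_new_tokens=40,
)
print(result[0]["generated_text"])
```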
Notable Open Source & Enterprise LLMs for Consideration:
| Model Family | Developer/Origin | Key Characteristics / Focus Areas |
| --- | --- | --- |
| Llama | Meta | Various sizes (e.g., 10B, 80B, 500B), next-gen performance |
| Mistral / Mixtral | Mistral AI | High efficiency, strong performance (e.g., 7B, Mixtral MoE) |
| DeepSeek | DeepSeek AI | Strong coding and reasoning, innovative architectures |
| Granite | IBM | Enterprise-focused, code & language models, trustworthy AI |
These models, combined with the broader ecosystem, form the building blocks for bespoke Enterprise GPT solutions. Tools like Ollama further simplify running many of these open source models locally for development and experimentation. Emerging AI Agent frameworks also enhance the ability to orchestrate complex tasks using these components.
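As a sketch of what running locally looks like, the snippet below calls a model served by Ollama through its REST API (it listens on localhost:11434 by default). It assumes you have already pulled a model, e.g. with `ollama pull llama3`.

```python
import requests

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",  # any model you have pulled locally
        "prompt": "Summarize the risks of Shadow AI in two sentences.",
        "stream": False,    # return a single JSON object instead of a token stream
    },
    timeout=120,
)
print(response.json()["response"])
```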
Here’s a discussion of these models from John M. Willis, the AI CIO author in our AIE Network, on IBM’s Mixture of Experts podcast. He provides excellent insight into how the models differ in focus and application.
AI Agent Frameworks for Orchestration:
| Framework | Key Focus / Characteristic |
| --- | --- |
| CrewAI | Role-playing autonomous agents |
| AutoGen | Multi-agent conversation framework |
| LangGraph | Building agentic workflows as graphs |
| | Platform for customizable AI assistants/agents |
| | Autonomous agent platform |
Bringing these elements together, a conceptual architecture for an "Enterprise GPT Stack" might resemble the following configuration (a short code sketch tying the layers together follows the list):
User Interface: Chat front ends (ChatGPT-style interfaces, internal tools, application integrations).
Orchestration: Frameworks like LangChain, LlamaIndex, or agent frameworks (CrewAI, AutoGen) to manage workflows, prompt engineering, and tool usage.
Models: Selection of foundation models (proprietary APIs or hosted open source like those above), potentially fine-tuned on enterprise data. Includes embedding models for RAG.
Vector Database: Chosen from options like Milvus, Weaviate, Qdrant, Chroma, etc., for storing and retrieving relevant data embeddings.
Data Sources: Internal knowledge bases, documents, databases, and real-time data feeds.
Infrastructure: Cloud platforms (AWS, Azure, GCP) or on-premises hardware, potentially utilizing container orchestration (Kubernetes) and ML Ops tooling.
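Here is a minimal end-to-end sketch of that stack: documents go into a vector database (Chroma here), retrieval pulls back the most relevant chunk, and a locally served model (Ollama here) generates a grounded answer. All names, documents, and the question are illustrative.

```python
import chromadb
import requests

# Data sources -> vector database
client = chromadb.Client()
kb = client.create_collection(name="enterprise_kb")
kb.add(
    ids=["hr-1", "sec-1"],
    documents=[
        "PTO accrues at 1.5 days per month for full-time employees.",
        "Production credentials must be rotated every 90 days.",
    ],
)

# Retrieval: fetch the chunk most similar to the user's question
question = "How often do we rotate production credentials?"
hits = kb.query(query_texts=[question], n_results=1)
context = "\n".join(hits["documents"][0])

# Generation: ground the local model's answer in the retrieved context (RAG)
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": prompt, "stream": False},
    timeout=120,
)
print(resp.json()["response"])
```

An orchestration framework replaces this glue code in production, but the moving parts are the same.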
Ultimately, controlling and deeply understanding the AI infrastructure—from data pipelines to model deployment—is not merely operational but a strategic imperative. It empowers enterprises to build unique, defensible competitive advantages by tailoring AI to proprietary data and core processes.
This control fosters agility, enabling faster innovation cycles than reliance on external providers allows. While hybrid approaches leveraging both proprietary and open source elements may be practical, building internal capabilities and maintaining infrastructure control facilitates optimized costs, enhances security, ensures compliance, and enables the creation of valuable intellectual property.
Strategically leveraging open source tools alongside robust data governance allows enterprises to construct powerful, customized AI capabilities, securing greater control over their technological future and competitive positioning.


LiteLLM: A lightweight proxy that enables routing, observability, caching, and multi-model support for LLMs, facilitating efficient and secure AI operations.
Kong AI Gateway: An enterprise-grade solution for managing, securing, and governing AI traffic, offering robust routing and access control features.
Chatbot UI: An open-source, extensible interface for deploying ChatGPT-like experiences, customizable to fit various enterprise needs.
LlamaIndex: A framework designed to facilitate retrieval-augmented generation by connecting LLMs with external data sources effectively.
Milvus: An open-source vector database optimized for scalable similarity search, essential for handling large-scale AI applications.
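As a sketch of the routing idea behind LiteLLM, the snippet below sends the same request shape to a hosted model and to a local Ollama model; the model strings follow LiteLLM’s provider/model convention, and the hosted call assumes the provider’s API key is in your environment.

```python
from litellm import completion

messages = [{"role": "user", "content": "One sentence on why vector databases matter."}]

# Hosted model via its provider's API...
hosted = completion(model="gpt-4o-mini", messages=messages)

# ...or a local open source model served by Ollama, via the same interface.
local = completion(
    model="ollama/llama3",
    messages=messages,
    api_base="http://localhost:11434",
)

print(hosted.choices[0].message.content)
print(local.choices[0].message.content)
```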

Creating Complex Documents with Google Gemini
I rarely write documents in a word processor anymore. I typically use ChatGPT to create an outline for my documents, especially my newsletters. But I have the luxury of trying new tools and moving back and forth between them a lot.
As I mentioned above, I gave a talk on building an Open Source GPT this week. I recorded the talk, but the slides didn’t come through on the video, so I downloaded the transcript and uploaded it to Google Gemini.
Then I uploaded the slides from the talk. I used the following Gemini prompt:
Turn this talk transcript into an article. Use these slides to help provide context to the talk and help me write this edition of my Artificially Intelligent Enterprise newsletter. Use this URL as an example of the style: https://theaienterprise.io
Gemini gave me a draft, and I then used its Canvas option to open an editor where I could edit by hand or chat with the Gemini model to improve the writing. Here’s what that looked like:

Editing the draft in Google Gemini’s Canvas
Then I used a variety of other prompts to improve the rough draft.
Add hyperlinks to all the projects in the table.
Gemini couldn’t fit the whole one-hour talk into the article, and it left out some information I wanted to include, such as Ollama. So I asked Gemini to insert it:
Find an appropriate place to mention Ollama.
Then I used this simple prompt to give the AI one last crack at it.
Critique the article and improve it while keeping my tone.
Finally, I edited the article in Canvas and pasted it into my Beehiiv newsletter editor for final tweaks. The edits weren’t minor—but the process was at least five times faster than starting from scratch. Plus, it leveraged work I had already done for the talk.
I hope you find ways to use Gemini to create your next document. The new Gemini 2.5 Pro model is well worth using.

I appreciate your support.
Your AI Sherpa,
Mark R. Hinkle