AI & Enterprise Technology Glossary
Canonical definitions and terminology for AI agents, enterprise architectures, and compliance-aware systems.
Core Concepts
AI Agent
An AI agent is a software system, typically built around a language model or other AI components, that perceives inputs, makes decisions, and performs actions autonomously or semi-autonomously to achieve a defined goal.
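A minimal sketch of that perceive-decide-act loop follows; the call_model function is a hypothetical stand-in for whatever language model API the agent actually uses.

```python
# Minimal sketch of an agent's perceive-decide-act loop.
# call_model is a hypothetical stand-in for a real language model API.

def call_model(prompt: str) -> str:
    # Placeholder: a real agent would call an LLM here.
    return "DONE: example answer"

def run_agent(goal: str, max_steps: int = 5) -> str:
    observations = []
    for _ in range(max_steps):
        # Perceive: assemble what the agent currently knows.
        prompt = f"Goal: {goal}\nObservations: {observations}\nDecide the next action."
        # Decide: ask the model for the next action.
        decision = call_model(prompt)
        # Act: stop when the model signals completion, otherwise record the result.
        if decision.startswith("DONE"):
            return decision
        observations.append(decision)
    return "Stopped without reaching the goal."

print(run_agent("Summarize today's support tickets"))
```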
Model Context Protocol (MCP)
Model Context Protocol (MCP) is a protocol and set of conventions for structuring, assembling, and governing the context provided to a language model during inference, including data, tools, state, and policy constraints.
Context Window
A context window is the maximum amount of information, measured in tokens, that a language model can consider at once during a single inference step.
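Because the window is finite, systems typically truncate or summarize older content to stay within a token budget. A minimal sketch, using whitespace "tokens" purely for illustration (real models use subword tokenizers):

```python
# Keep only the most recent messages that fit within a token budget.
# Whitespace tokens are an illustrative stand-in for a real subword tokenizer.

def count_tokens(text: str) -> int:
    return len(text.split())

def fit_to_window(messages: list[str], max_tokens: int) -> list[str]:
    kept, used = [], 0
    for message in reversed(messages):        # newest first
        cost = count_tokens(message)
        if used + cost > max_tokens:
            break
        kept.append(message)
        used += cost
    return list(reversed(kept))               # restore chronological order

history = ["user: hello", "assistant: hi, how can I help?", "user: summarize this report"]
print(fit_to_window(history, max_tokens=10))
```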
Tool Calling
Tool calling is the capability of an AI agent or language model to invoke external functions, APIs, or systems as part of its reasoning and execution process.
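The pattern is typically: the model emits a structured request naming a tool and its arguments, the host system executes the matching function, and the result is fed back into the model's context. A minimal sketch, where the registry and the get_weather tool are hypothetical:

```python
# Sketch of tool calling: a structured "call" from the model is dispatched
# to a local function. The registry and get_weather tool are hypothetical.

def get_weather(city: str) -> str:
    return f"Sunny in {city}"   # stub; a real tool would call an external API

TOOLS = {"get_weather": get_weather}

def dispatch(tool_call: dict) -> str:
    # tool_call mimics the structured output a model emits when it decides to use a tool.
    func = TOOLS[tool_call["name"]]
    return func(**tool_call["arguments"])

model_output = {"name": "get_weather", "arguments": {"city": "Berlin"}}
print(dispatch(model_output))   # the result is fed back into the model's context
```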
Agent Memory
Agent memory is a mechanism that allows an AI agent to store, retrieve, and use information from past interactions or executions across multiple inference steps or sessions.
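A minimal sketch of a persistent memory store, assuming a local JSONL file as the storage location and keyword matching as a stand-in for the semantic retrieval most real systems use:

```python
# Sketch of agent memory: persist interaction records and recall relevant ones.
# The file path is illustrative; keyword matching stands in for semantic search.
import json
from pathlib import Path

MEMORY_FILE = Path("agent_memory.jsonl")

def remember(fact: str) -> None:
    with MEMORY_FILE.open("a") as f:
        f.write(json.dumps({"fact": fact}) + "\n")

def recall(query: str) -> list[str]:
    if not MEMORY_FILE.exists():
        return []
    records = [json.loads(line) for line in MEMORY_FILE.open()]
    words = query.lower().split()
    return [r["fact"] for r in records if any(w in r["fact"].lower() for w in words)]

remember("The customer prefers weekly email reports.")
print(recall("email reports"))
```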
Architectures
Retrieval-Augmented Generation (RAG)
Retrieval-Augmented Generation (RAG) is an approach in which a language model retrieves relevant external information at inference time and incorporates it into the context to generate responses grounded in that retrieved data.
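A minimal sketch of the retrieve-then-augment flow, using word-overlap scoring as a stand-in for the vector similarity search most RAG systems rely on; the documents are illustrative:

```python
# Sketch of RAG: retrieve relevant documents, then include them in the prompt.
# Word-overlap scoring stands in for embedding-based similarity search.

DOCUMENTS = [
    "The refund policy allows returns within 30 days.",
    "Support hours are 9am to 5pm on weekdays.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    def score(doc: str) -> int:
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(DOCUMENTS, key=score, reverse=True)[:k]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What is the refund policy?"))
# The assembled prompt is then sent to the language model for generation.
```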
MCP vs RAG
MCP and RAG address different aspects of how context is provided to a language model: Model Context Protocol (MCP) governs how all contextual inputs are structured and controlled, while Retrieval-Augmented Generation (RAG) focuses specifically on retrieving external information to include in that context.
Enterprise AI Architecture
Enterprise AI architecture is the structured design of components, workflows, and governance mechanisms required to deploy, operate, and scale AI systems reliably within an organization.
AI Agent Orchestration
AI agent orchestration is the coordination and management of one or more AI agents, defining how agents are created and scheduled, how they communicate, and how they collaborate to achieve a shared or individual set of goals.
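A minimal sketch of an orchestrator that routes work between two agents and chains their outputs; the agent names and division of labor are illustrative assumptions:

```python
# Sketch of orchestration: a coordinator routes each step to the registered
# agent and chains their outputs. The agents here are illustrative stubs.

def research_agent(task: str) -> str:
    return f"notes on '{task}'"          # stand-in for an LLM-backed agent

def writing_agent(task: str) -> str:
    return f"draft based on: {task}"

AGENTS = {"research": research_agent, "write": writing_agent}

def orchestrate(goal: str) -> str:
    notes = AGENTS["research"](goal)     # step 1: gather information
    draft = AGENTS["write"](notes)       # step 2: turn it into a deliverable
    return draft

print(orchestrate("quarterly compliance summary"))
```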
Governance & Compliance
Compliance-aware AI Systems
Compliance-aware AI systems are AI systems designed to operate in accordance with legal, regulatory, and organizational requirements by embedding compliance constraints directly into their architecture and runtime behavior.
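One way such constraints show up at runtime is a policy check applied before the model is ever called. A minimal sketch, where the data categories and policy are illustrative assumptions:

```python
# Sketch of a compliance constraint enforced at runtime: requests touching
# restricted data categories are blocked before the model call happens.
# The category names and the policy itself are illustrative.

RESTRICTED_CATEGORIES = {"health_records", "payment_data"}

def compliant_invoke(prompt: str, data_categories: set[str]) -> str:
    blocked = data_categories & RESTRICTED_CATEGORIES
    if blocked:
        raise PermissionError(f"Policy violation: {sorted(blocked)} may not be sent to the model")
    return f"model response to: {prompt}"   # placeholder for the actual model call

print(compliant_invoke("Summarize ticket volume", {"support_tickets"}))
# compliant_invoke("Summarize diagnoses", {"health_records"}) would raise PermissionError
```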
GDPR-compliant AI
GDPR-compliant AI refers to AI systems designed and operated in accordance with the General Data Protection Regulation (GDPR), ensuring lawful processing of personal data throughout the AI system's lifecycle.
Auditability in AI Systems
Auditability in AI systems is the ability to inspect, trace, and verify how an AI system produced a specific output, decision, or action based on its inputs, context, and configuration.
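In practice this usually means recording every invocation's inputs, configuration, and output so a decision can be reconstructed later. A minimal sketch, assuming an in-memory log in place of the durable, append-only store a real system would use:

```python
# Sketch of an audit trail: every model invocation is recorded with its
# inputs, configuration, and output. In-memory list stands in for durable storage.
import json
import time

AUDIT_LOG = []

def audited_call(prompt: str, model: str = "example-model") -> str:
    output = f"response to: {prompt}"          # placeholder for the model call
    AUDIT_LOG.append({
        "timestamp": time.time(),
        "model": model,
        "prompt": prompt,
        "output": output,
    })
    return output

audited_call("Approve invoice 1042?")
print(json.dumps(AUDIT_LOG, indent=2))
```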
Access Control for AI Agents
Access control for AI agents is the set of mechanisms and policies that define which data, tools, and actions an AI agent is permitted to access or execute during operation.
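A common enforcement point is a per-agent allow-list checked before any tool executes. A minimal sketch, with hypothetical roles and tool names:

```python
# Sketch of access control: each agent role has an explicit allow-list of
# tools, checked before execution. Roles and tool names are illustrative.

PERMISSIONS = {
    "support_agent": {"search_tickets", "send_reply"},
    "reporting_agent": {"search_tickets"},
}

def execute_tool(role: str, tool: str) -> str:
    if tool not in PERMISSIONS.get(role, set()):
        raise PermissionError(f"{role} is not allowed to call {tool}")
    return f"{tool} executed"

print(execute_tool("reporting_agent", "search_tickets"))
# execute_tool("reporting_agent", "send_reply") would raise PermissionError
```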
Practical Patterns
Multi-agent Systems
Multi-agent systems are systems composed of multiple AI agents that interact, coordinate, or collaborate to achieve shared or individual goals within a common environment.
Long-running AI Agents
Long-running AI agents are AI agents designed to operate continuously or across extended periods of time, maintaining state and progressing toward goals over multiple inference steps rather than completing tasks in a single interaction.
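Maintaining state over long horizons usually means checkpointing after each step so the agent can resume after an interruption. A minimal sketch, where the file path and state fields are illustrative assumptions:

```python
# Sketch of a long-running agent that checkpoints its state after every
# step so it can resume after a restart. Path and fields are illustrative.
import json
from pathlib import Path

STATE_FILE = Path("agent_state.json")

def load_state() -> dict:
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())
    return {"step": 0, "findings": []}

def save_state(state: dict) -> None:
    STATE_FILE.write_text(json.dumps(state))

state = load_state()
while state["step"] < 3:                       # stand-in for progress toward a goal
    state["findings"].append(f"result of step {state['step']}")
    state["step"] += 1
    save_state(state)                          # survives crashes and restarts

print(state)
```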
Human-in-the-loop AI
Human-in-the-loop AI refers to AI systems designed to incorporate human judgment, review, or intervention at defined points in the system's decision-making or execution process.
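A common pattern is an approval gate: routine actions run automatically, while high-impact ones wait for an explicit human decision. A minimal sketch, with an illustrative impact threshold and a console prompt standing in for a real review workflow:

```python
# Sketch of a human-in-the-loop gate: high-impact actions are held for
# explicit approval before execution. Threshold and actions are illustrative.

def propose_action(action: str, impact: str) -> str:
    if impact == "high":
        answer = input(f"Agent wants to: {action}. Approve? [y/N] ")
        if answer.strip().lower() != "y":
            return "Action rejected by reviewer."
    return f"Executed: {action}"

print(propose_action("Send refund of $25", impact="low"))        # runs automatically
print(propose_action("Delete customer account", impact="high"))  # waits for a human
```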
Failure Modes in AI Agents
Failure modes in AI agents are recurring patterns in which an agent produces incorrect, unsafe, inefficient, or unintended behavior due to limitations in context, reasoning, data, tooling, or system design.
Data & Infrastructure
Edge AI
Edge AI is the deployment and execution of artificial intelligence models directly on edge devices or local infrastructure rather than in centralized cloud environments, enabling low-latency, real-time inference and reducing the need for raw data to leave the device or premises.
On-premise AI
On-premise AI refers to the deployment of AI systems entirely within an organization's own infrastructure, where all data processing, model inference, and storage occur on locally controlled hardware rather than third-party cloud services.
Training Data Preparation
Training data preparation is the process of collecting, cleaning, transforming, and organizing raw data into a format suitable for training machine learning models, including quality assessment, normalization, and validation.
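A minimal sketch of three common preparation steps, completeness filtering, text normalization, and deduplication, applied to illustrative records:

```python
# Sketch of basic training data preparation: drop incomplete records,
# normalize text, and remove duplicates. Field names are illustrative.

raw_records = [
    {"text": "  Great PRODUCT, works well. ", "label": "positive"},
    {"text": "great product, works well.", "label": "positive"},   # duplicate after cleaning
    {"text": "", "label": "negative"},                             # incomplete
]

def prepare(records: list[dict]) -> list[dict]:
    seen, cleaned = set(), []
    for r in records:
        text = r["text"].strip().lower()
        if not text or not r.get("label"):      # completeness check
            continue
        if text in seen:                        # deduplication
            continue
        seen.add(text)
        cleaned.append({"text": text, "label": r["label"]})
    return cleaned

print(prepare(raw_records))
```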
PII Removal for AI
PII removal for AI is the systematic identification and removal or anonymization of personally identifiable information from datasets used for training, fine-tuning, or evaluating machine learning models.
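A minimal sketch of rule-based redaction; the patterns below cover only emails and US-style phone numbers, whereas production pipelines typically combine patterns with trained entity-recognition models:

```python
# Sketch of rule-based PII redaction using regular expressions.
# Patterns cover emails and US-style phone numbers only (illustrative).
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact Jane at jane.doe@example.com or 555-123-4567."))
```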
Data Quality for Machine Learning
Data quality for machine learning refers to the assessment and assurance that training data meets the standards of accuracy, completeness, consistency, and relevance required for a model to learn effectively and generalize correctly.
Bias Detection in AI
Bias detection in AI is the process of identifying systematic errors or unfair patterns in training data, model behavior, or system outputs that could lead to discriminatory or unrepresentative results across different groups or scenarios.
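One simple check is comparing positive-outcome rates across groups, a demographic-parity style comparison. A minimal sketch with illustrative data; real audits use multiple metrics and statistical testing:

```python
# Sketch of a simple bias check: compare positive-outcome rates across groups
# (demographic parity style). The data and interpretation are illustrative.

predictions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
]

def approval_rates(rows: list[dict]) -> dict:
    rates = {}
    for group in {r["group"] for r in rows}:
        members = [r for r in rows if r["group"] == group]
        rates[group] = sum(r["approved"] for r in members) / len(members)
    return rates

rates = approval_rates(predictions)
gap = max(rates.values()) - min(rates.values())
print(rates, "parity gap:", gap)   # a large gap flags the model for closer review
```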
