The Research Behind the Periodic Cube of AI
Academic frameworks, industry standards, and design principles that shaped the Periodic Cube of AI framework.
Why Another Framework?
The AI landscape is drowning in complexity. Every week brings new models, tools, platforms, and paradigms. Organizations struggle with fundamental questions:
- What are the building blocks of an AI capability?
- How do these pieces fit together?
- Who should own what?
- What’s mature enough for production vs. still experimental?
- Where should we invest next?
Existing frameworks address pieces of this puzzle, but none provides the multi-dimensional view needed to answer all of these questions simultaneously.
The Periodic Cube of AI synthesizes established standards into a unified, visual framework. This post explains the research and standards that informed its design.
The Periodic Table Metaphor
Why Mendeleev Matters
In 1869, Dmitri Mendeleev arranged chemical elements into a table organized by atomic properties. This wasn’t just organization—it was prediction. The table revealed patterns that allowed Mendeleev to predict elements that hadn’t yet been discovered.
The periodic table succeeded because it:
- Organized complexity into a navigable structure
- Revealed relationships between seemingly unrelated elements
- Enabled prediction about properties based on position
- Evolved gracefully as new elements were discovered
These same principles guided the Periodic Cube of AI. By positioning AI components along multiple meaningful dimensions, patterns emerge. Relationships become visible. Strategic decisions become clearer.
From Table to Cube
A 2D table couldn’t capture AI’s complexity. The chemical periodic table works because elements vary primarily along two dimensions (atomic number and electron configuration). AI components vary across many more.
The solution: extend the metaphor to three spatial dimensions plus five visual dimensions, creating an 8-dimensional classification system rendered as an interactive 3D cube.
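In code terms, each element of the cube can be thought of as a record carrying one value per dimension. A minimal sketch in Python (the field names and example values are illustrative, not the framework's actual schema):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CubeComponent:
    """Hypothetical record: one value per dimension of the cube."""
    name: str
    functional_group: str   # Dimension 1, X-axis
    sfia_category: str      # Dimension 2, Y-axis
    trl: str                # Dimension 3, Z-axis: Emerging / Maturing / Established
    sourcing: str           # Dimension 4, color: Build / Buy / Integrate / Hybrid
    criticality: str        # Dimension 5, box width
    human_intensity: str    # Dimension 6, box height
    org_ownership: str      # Dimension 7, box depth
    cost_structure: str     # Dimension 8, supplementary views

# An illustrative placement for one component
feature_store = CubeComponent(
    name="Feature Store",
    functional_group="Data & Infrastructure",
    sfia_category="Data",
    trl="Established",
    sourcing="Buy",
    criticality="High Priority",
    human_intensity="Fully Automated",
    org_ownership="Data/Platform Engineering",
    cost_structure="Usage-Based",
)
```

The eight non-name fields correspond to the eight dimensions described in the sections that follow.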
The Eight Dimensions: Sources and Rationale
Dimension 1: Functional Groups (X-Axis)
| Functional Group | Role | Analogous to |
|---|---|---|
| 1. Data & Infrastructure | Foundation layer | Database, compute, storage |
| 2. Model Development | Core AI processes | Model training, serving, management |
| 3. Tooling & Integration | Enablement | Developer tools, testing, version control |
| 4. Application & Orchestration | User-facing systems | Applications, agents, workflows |
| 5. Governance & Operations (MLOps) | Control and compliance | Security, monitoring, audit |
Dimension 2: SFIA Categories (Y-Axis)
Source: Skills Framework for the Information Age (SFIA)
| SFIA Category | Focus Area | Example Components |
|---|---|---|
| Strategy and Governance | Direction and control | AI System Inventory, Compliance Framework |
| People and Change | Human factors | Knowledge Elicitation, Human-in-the-Loop |
| Process and Operations | Workflows and procedures | Enterprise Workflows, Data Contracts |
| Technology | Technical capabilities | Models, Serving Runtime, Developer Tools |
| Data | Information management | Feature Store, Training Data, Data Catalog |
Why SFIA?
- Globally recognized standard used by organizations worldwide
- Provides consistent vocabulary for skill assessment and development
- Aligns AI capabilities with broader digital competency frameworks
- Enables integration with existing HR and capability planning systems
Dimension 3: Technology Readiness Level (Z-Axis)
Source: NASA Technology Readiness Level (TRL) Scale
The depth axis indicates technology maturity using an adaptation of NASA’s TRL scale. Originally developed in 1974 for space technology assessment, TRL has been adopted across industries including defense, energy, and enterprise technology.
NASA’s 9-level scale was simplified to three levels appropriate for enterprise AI:
| Cube Level | NASA TRL Equivalent | Description |
|---|---|---|
| Emerging | TRL 1-4 | Cutting-edge, high risk, limited production use |
| Maturing | TRL 5-7 | Gaining adoption, patterns stabilizing, proven in some contexts |
| Established | TRL 8-9 | Battle-tested, well-understood, reliable in production |
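The mapping above can be expressed as a small helper function. This is a hypothetical sketch, not part of any published tooling:

```python
def cube_maturity(nasa_trl: int) -> str:
    """Collapse NASA's 9-level TRL scale into the cube's three bands."""
    if not 1 <= nasa_trl <= 9:
        raise ValueError("NASA TRL must be between 1 and 9")
    if nasa_trl <= 4:
        return "Emerging"      # TRL 1-4: cutting-edge, high risk
    if nasa_trl <= 7:
        return "Maturing"      # TRL 5-7: gaining adoption
    return "Established"       # TRL 8-9: battle-tested
```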
Why TRL?
- Proven methodology for technology risk assessment
- Provides objective criteria for maturity evaluation
- Widely understood across technical and business stakeholders
- Helps balance innovation with operational stability
The TRL dimension is dynamic—technologies move along this axis over time. What’s “Emerging” today (e.g., multi-agent orchestration) becomes “Established” as the field matures.
Dimension 4: Build/Buy/Integrate (Color)
Source: Enterprise procurement and sourcing strategy
Each component’s color indicates the typical sourcing approach:
| Strategy | Color | When to Apply |
|---|---|---|
| Build | Orange | Custom development for differentiation or unique requirements |
| Buy | Blue | Commercial solutions available and cost-effective |
| Integrate | Green | Open-source or API integration sufficient |
| Hybrid | Purple | Mix of approaches based on context |
This dimension draws from standard make-vs-buy analysis frameworks used in enterprise IT and extended for AI-specific considerations:
- Data moats: Components where proprietary data creates competitive advantage favor “Build”
- Commoditization: As capabilities become standard, they shift toward “Buy” or “Integrate”
- Speed: Time-to-market pressures favor “Buy” or “Integrate”
- Control: Regulatory or security requirements may mandate “Build”
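These considerations can be sketched as a first-pass decision rule, applied in priority order. The function and its ordering are illustrative assumptions, not a prescribed methodology:

```python
def suggest_sourcing(*, regulated: bool, data_moat: bool,
                     commoditized: bool, time_pressure: bool) -> str:
    """Hypothetical first-pass sourcing heuristic, checked in priority order."""
    if regulated:        # control: regulatory/security requirements mandate Build
        return "Build"
    if data_moat:        # proprietary data advantage favors Build
        return "Build"
    if commoditized:     # standard capability: Buy a commercial solution
        return "Buy"
    if time_pressure:    # speed: Integrate open-source or an API
        return "Integrate"
    return "Hybrid"      # no dominant signal: mix approaches by context
```

In practice a real sourcing decision weighs far more context than four booleans; the point is only that each heuristic maps to a concrete branch.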
Dimension 5: Criticality (Box Width)
Source: Business impact analysis and risk assessment frameworks
Width encodes how critical a component is to AI system success:
| Level | Impact if Unavailable |
|---|---|
| Mission-Critical | System fails completely |
| High Priority | Significant degradation |
| Enhancing | Reduced capability but functional |
This aligns with business continuity planning standards and helps prioritize investments and redundancy planning.
Dimension 6: Human Intensity (Box Height)
Source: Human-AI collaboration research and automation frameworks
Height represents the degree of human involvement required:
| Level | Human Role |
|---|---|
| Fully Automated | No human intervention in normal operation |
| Human-Supervised | Human monitors and intervenes when needed |
| Human-Collaborative | Human and AI work together interactively |
| Human-Driven | Human does primary work, AI assists |
This dimension reflects research on effective human-AI teaming and informs decisions about:
- Staffing and skill requirements
- User experience design
- Liability and accountability
- Automation ROI
Dimension 7: Organizational Ownership (Box Depth)
Source: Enterprise organizational design patterns
Depth indicates which team typically owns a component:
| Owner | Typical Responsibilities |
|---|---|
| Data/Platform Engineering | Infrastructure, data pipelines, compute |
| ML/AI Engineering | Models, training, serving, evaluation |
| Application Development | User-facing applications, integrations |
| Security/Compliance | Governance, audit, access control |
| Business/Product | Requirements, adoption, value realization |
Clear ownership is critical for AI success. Industry analyses consistently identify unclear accountability as a leading cause of AI project failure.
Dimension 8: Cost Structure
Source: Financial management and FinOps frameworks
The eighth dimension (represented in supplementary views) categorizes spending patterns:
| Structure | Characteristics |
|---|---|
| Capital Expenditure | Large upfront investment, depreciated over time |
| Operational Expenditure | Ongoing operational costs |
| Usage-Based | Pay-per-use pricing (API calls, compute hours) |
| Mixed/Variable | Combination depending on usage patterns |
This dimension supports FinOps practices for AI cost management—increasingly important as AI spending scales.
The Learning Layer: Three Additional Dimensions
Behind the 60 production components sits a corresponding learning layer of 60 learning components. These add three pedagogical dimensions:
Domain (What area of AI knowledge?)
| Domain | Focus |
|---|---|
| Perception & Data | Sensing, collecting, processing information |
| Processing & Models | Computation, model architectures, training |
| Interaction & UX | Human-AI interfaces and experiences |
| Knowledge & Reasoning | Storage, retrieval, inference |
| Ethics & Governance | Responsible AI, compliance, societal impact |
Scale (At what level of abstraction?)
| Scale | Examples |
|---|---|
| Nano | Transistors, basic signals, fundamental concepts |
| Micro | Features, embeddings, neural network components |
| Meso | Agents, applications, integrated systems |
| Macro | Societal impact, regulation, ecosystem effects |
Depth (How to engage with the knowledge?)
| Depth | Learning Approach |
|---|---|
| Theory | Conceptual understanding, principles |
| Build | Hands-on implementation, coding |
| Product | Operational excellence, production deployment |
These dimensions create 286 cross-references between production and learning components, enabling personalized learning paths based on job role and current capabilities.
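A minimal sketch of how the three pedagogical tags might drive path selection. The component names and tag values below are illustrative, not the framework's actual catalog:

```python
# Hypothetical learning components tagged on Domain / Scale / Depth
learning_components = [
    {"name": "Embeddings", "domain": "Processing & Models",
     "scale": "Micro", "depth": "Build"},
    {"name": "Model Architectures", "domain": "Processing & Models",
     "scale": "Micro", "depth": "Theory"},
    {"name": "AI Regulation", "domain": "Ethics & Governance",
     "scale": "Macro", "depth": "Theory"},
]

def learning_path(domain: str, depth: str) -> list[str]:
    """Filter the catalog down to one domain and one engagement depth."""
    return [c["name"] for c in learning_components
            if c["domain"] == domain and c["depth"] == depth]
```

A personalized path would layer role-based cross-references on top of this kind of filter.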
Alignment with Regulatory Frameworks
NIST AI Risk Management Framework
The NIST AI RMF, published in 2023 and updated in 2024 with generative AI guidance, provides a voluntary framework for AI risk management structured around four functions:
| NIST Function | Periodic Cube of AI Alignment |
|---|---|
| GOVERN | Governance & Operations functional group, Strategy and Governance SFIA category |
| MAP | Data & Infrastructure, context and lineage components |
| MEASURE | Evaluation Harness, Drift Monitoring, Safety & Robustness Evals |
| MANAGE | Runtime Override, Alerts & Reports, Control Plane |
The Periodic Cube of AI components map directly to NIST AI RMF outcomes, enabling organizations to use the cube for compliance planning.
EU AI Act
The EU AI Act, which entered into force in August 2024, classifies AI systems into risk categories:
| EU AI Act Category | Periodic Cube of AI Relevance |
|---|---|
| Unacceptable Risk | Guardrails, Safety & Robustness components |
| High-Risk | Audit & Forensics, System Cards, Compliance Framework |
| Limited Risk | Response Generation & UX, Conversational Interfaces |
| Minimal Risk | Most application components |
The Criticality dimension helps organizations identify which of their AI systems may fall under high-risk classification, while Governance components support the documentation and oversight requirements.
MLOps and Platform Architecture Standards
The Periodic Cube of AI incorporates patterns from established MLOps maturity models:
Microsoft Azure MLOps Maturity Model
The Azure MLOps Maturity Model defines five levels of operational maturity. The Periodic Cube of AI’s TRL dimension aligns with this progression—components at “Established” TRL support higher maturity levels, while “Emerging” components may require additional operational investment.
Google Cloud MLOps Framework
Google’s MLOps guidance emphasizes continuous integration, delivery, and training. The Periodic Cube of AI’s Governance & Operations functional group encompasses these practices through components like:
- Model Registry (version control)
- Drift Monitoring (continuous validation)
- Deployment & Serving (continuous delivery)
- Evaluation Harness (continuous testing)
AWS MLOps Foundation
AWS’s MLOps foundation emphasizes flexibility, reproducibility, and auditability. These tenets are reflected in the Periodic Cube of AI through:
- Data/Model Lineage components (reproducibility)
- Audit & Forensics (auditability)
- Low- & No-Code Platforms (flexibility)
The 60 Production Components: Selection Criteria
The 60 production components in the Periodic Cube of AI were selected based on:
- Prevalence: Components that appear across multiple enterprise AI reference architectures
- Distinctiveness: Each component represents a functionally distinct capability
- Completeness: Together, the components cover the full AI lifecycle
- Actionability: Each component maps to concrete investments, skills, or vendor categories
The components are not exhaustive—emerging capabilities may warrant additions over time. The framework is designed to evolve, like Mendeleev’s original table.
Persona Framework: Connecting Strategy to Roles
The six persona views (CEO, CTO, CFO, CPO, CISO, COO) are grounded in stakeholder analysis frameworks:
| Persona | Primary Dimensions | Strategic Questions |
|---|---|---|
| CEO | Criticality, Build/Buy | Where do we differentiate? What’s our AI strategy? |
| CTO | TRL, Org Ownership | What technical capabilities do we need? |
| CFO | Cost Structure, Build/Buy | What’s the ROI? Where are we over/under-investing? |
| CPO | Human Intensity, TRL | How do we ship AI features faster? |
| CISO | Criticality, Org Ownership | Where are our AI risks? Are we compliant? |
| COO | Human Intensity, Cost Structure | Where can we automate? What’s the efficiency gain? |
Each persona view filters and highlights components based on relevance scores calculated from the dimensional attributes.
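A sketch of what such relevance scoring might look like. The attribute names, example components, and weights are invented for illustration; the cube's actual calculation is not reproduced here:

```python
# Hypothetical CEO weighting: reward attributes that answer
# "Where do we differentiate? What's our AI strategy?"
CEO_WEIGHTS = {
    ("criticality", "Mission-Critical"): 1.0,
    ("criticality", "High Priority"): 0.6,
    ("sourcing", "Build"): 0.8,
}

def relevance(component: dict, weights: dict) -> float:
    """Sum the weights whose (attribute, value) pair the component matches."""
    return sum(w for (attr, value), w in weights.items()
               if component.get(attr) == value)

guardrails = {"criticality": "Mission-Critical", "sourcing": "Build"}
drift_monitoring = {"criticality": "High Priority", "sourcing": "Buy"}
```

Under these invented weights, Guardrails would outscore Drift Monitoring in the CEO view, which matches the persona's focus on differentiation.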
Design Principles
Several principles guided the framework’s design:
1. Multi-Dimensional by Design
Single-axis classifications (e.g., “AI tools” or “AI maturity”) fail to capture the complexity of enterprise AI. Eight dimensions provide enough resolution for meaningful differentiation without overwhelming users.
2. Standards-Based
By building on SFIA, TRL, and NIST frameworks, the Periodic Cube of AI inherits their rigor and interoperability. Organizations already using these standards can integrate the cube into existing processes.
3. Actionable, Not Academic
Every dimension maps to real decisions: hiring (SFIA), procurement (Build/Buy), risk management (Criticality, TRL), budgeting (Cost Structure). The framework is designed for practitioners, not theorists.
4. Visual and Interactive
Complex frameworks fail when they live only in documents. The 3D visualization makes patterns visible and exploration intuitive. The complementary 2D periodic table views provide detailed reference.
5. Evolvable
The AI field changes rapidly. The framework accommodates new components, shifting TRL levels, and evolving best practices without requiring structural redesign.
Limitations and Future Research
Current Limitations
- Technology-centric: The framework emphasizes technical components; organizational and cultural factors are less well-represented
- Enterprise-focused: Startups and research organizations may find some components less relevant
- Western/Global North perspective: Regulatory alignment emphasizes EU and US frameworks; other jurisdictions may require adaptation
- Snapshot in time: TRL classifications require periodic updates as the field evolves
Future Directions
- Quantitative benchmarking: Empirical data on typical capability levels across industries
- Integration with capability maturity models: Formal mapping to established maturity frameworks
- Expanded learning layer: More granular skill definitions and learning resource recommendations
Conclusion: Standing on the Shoulders of Standards
The Periodic Cube of AI doesn’t invent new concepts. It synthesizes proven frameworks—SFIA, TRL, NIST AI RMF, MLOps architectures—into a unified visual system designed for strategic decision-making.
Just as Mendeleev’s table made chemistry navigable, the Periodic Cube of AI makes the AI landscape comprehensible. It transforms overwhelming complexity into structured insight.
The research continues. As new standards emerge and the field evolves, the framework will adapt. But the core principle remains: organize complexity to reveal insight.
References and Further Reading
The Periodic Cube of AI
- Interactive 3D Visualization (full screen version)
- AI Skills Self-Assessment (map your results against the Cube)
- Introduction to the Periodic Cube of AI
- Learning Paths by Persona
