{"id":21253,"date":"2026-04-16T13:26:41","date_gmt":"2026-04-16T13:26:41","guid":{"rendered":"https:\/\/ideainthebox.com\/index.php\/2026\/04\/16\/treating-enterprise-ai-as-an-operating-layer\/"},"modified":"2026-04-16T13:26:41","modified_gmt":"2026-04-16T13:26:41","slug":"treating-enterprise-ai-as-an-operating-layer","status":"publish","type":"post","link":"https:\/\/ideainthebox.com\/index.php\/2026\/04\/16\/treating-enterprise-ai-as-an-operating-layer\/","title":{"rendered":"Treating enterprise AI as an operating layer"},"content":{"rendered":"<div>\n<p>There\u2019s a fault line running through enterprise AI, and it\u2019s not the one getting the most attention. The public conversation still tracks foundation models and benchmarks \u2014 GPT versus Gemini, reasoning scores, and marginal capability gains. But in practice, the more durable advantage is structural: who owns the operating layer where intelligence is applied, governed, and improved. One model treats AI as an on-demand utility; the other embeds it as an operating layer (the combination of workflow software, data capture, feedback loops, and governance that sits between models and real work) that compounds with use.<\/p>\n<figure class=\"wp-block-image size-full\"><img width=\"1253\" height=\"836\" src=\"https:\/\/wp.technologyreview.com\/wp-content\/uploads\/2026\/04\/Ensemble-article-iStock-2213998103.jpg\" 
srcset=\"https:\/\/wp.technologyreview.com\/wp-content\/uploads\/2026\/04\/Ensemble-article-iStock-2213998103.jpg 1253w, https:\/\/wp.technologyreview.com\/wp-content\/uploads\/2026\/04\/Ensemble-article-iStock-2213998103.jpg?resize=300,200 300w, https:\/\/wp.technologyreview.com\/wp-content\/uploads\/2026\/04\/Ensemble-article-iStock-2213998103.jpg?resize=768,512 768w\" sizes=\"(max-width: 1253px) 100vw, 1253px\" alt=\"\" class=\"wp-image-1135559\"><\/figure>\n<p>Model providers like OpenAI and Anthropic sell intelligence as a service: you have a problem, you call an API, you get an answer. That intelligence is general-purpose, largely stateless, and only loosely connected to the day-to-day workflow where decisions are made. It\u2019s highly capable and increasingly interchangeable. The distinction that matters is whether intelligence resets on every prompt or accumulates over time.<\/p>\n<p>Incumbent organizations, by contrast, can treat AI as an operating layer: instrumentation across workflows, feedback loops from human decisions, and governance that turns individual tasks into reusable policy. In that setup, every exception, correction, and approval becomes a chance to learn\u2014and intelligence can improve as the platform absorbs more of the organization\u2019s work. The organizations most likely to shape the enterprise AI era are those that can embed intelligence directly into operational platforms and instrument those platforms so work generates usable signals.<\/p>\n<p>The prevailing narrative says nimble startups will out-innovate incumbents by building AI-native from scratch. If AI is primarily a model problem, that story holds. 
But in many enterprise domains, AI is a systems problem \u2014 integrations, permissions, evaluation, and change management \u2014 where advantage accrues to whoever already sits inside high-volume, high-stakes workflows and converts that position into learning and automation.<\/p>\n<h3 class=\"wp-block-heading\"><strong>The inversion: AI executes, humans adjudicate<\/strong><\/h3>\n<p>Traditional services organizations are built on a simple architecture: humans use software to do expert work. Operators log into systems, navigate workflows, make decisions, and process cases. Technology is the medium. Human judgment is the product.<\/p>\n<p>An AI-native platform inverts this. It ingests a problem, applies accumulated domain knowledge, autonomously executes what it can with high confidence, and routes targeted sub-tasks to human experts when the situation demands judgment that the system can\u2019t yet reliably provide.<\/p>\n<p>But inverting human-AI interaction isn\u2019t just a UI redesign \u2014 it requires raw material. It\u2019s only possible when the platform is built on a foundation of domain expertise, behavioral data, and operational knowledge accumulated over years.<\/p>\n<h3 class=\"wp-block-heading\"><strong>The three compounding assets incumbents already own<\/strong><\/h3>\n<p>AI-native startups begin with a clean architectural slate and can move quickly. What they can\u2019t easily manufacture is the raw material that makes domain AI defensible at scale:<\/p>\n<ul class=\"wp-block-list\">\n<li>Proprietary operational data<\/li>\n<li>A large workforce of domain experts whose day-to-day decisions generate training signals<\/li>\n<li>Accumulated tacit knowledge about how complex work actually gets done<\/li>\n<\/ul>\n<p>Services companies already have all three. But these ingredients aren\u2019t moats on their own. 
They become an advantage only when a company can systematically convert messy operations into AI-ready signals and institutional knowledge \u2014 then feed the results back into the workflow so the system keeps improving.<\/p>\n<h3 class=\"wp-block-heading\"><strong>Codifying expertise into reusable signals<\/strong><\/h3>\n<p>In most services organizations, expertise is tacit and perishable. The best operators know things they cannot easily articulate: heuristics developed over years, edge-case intuitions, and pattern recognition that operates below the level of conscious reasoning.<\/p>\n<p>At Ensemble, the strategy for addressing this challenge is knowledge distillation: the systematic conversion of expert judgment and operational decisions into machine-readable training signals.<\/p>\n<p>In healthcare revenue cycle management, for example, systems can be seeded with explicit domain knowledge and then deepen their coverage through structured daily interaction with operators. In Ensemble\u2019s implementation, the system identifies gaps, formulates targeted questions, and cross-checks answers across multiple experts to capture both consensus and edge-case nuance. It then synthesizes these inputs into a living knowledge base that reflects the situational reasoning behind expert-level performance.<\/p>\n<h3 class=\"wp-block-heading\"><strong>Turning decisions into a learning flywheel<\/strong><\/h3>\n<p>Once a system is constrained enough to be trusted, the next question is how it gets better without waiting for annual model upgrades. Every time a skilled operator makes a decision, they generate more than a completed task. They generate a potential labeled example\u2014context paired with an expert action (and sometimes an outcome). 
At scale, across thousands of operators and millions of decisions, that stream can power supervised learning, evaluation, and targeted forms of reinforcement\u2014teaching systems to behave more like experts in real conditions.<\/p>\n<p>For example, if an organization processes 50,000 cases a week and captures just three high-quality decision points per case, that\u2019s 150,000 labeled examples every week without creating a separate data-collection program.<\/p>\n<p>A more advanced human-in-the-loop design places experts inside the decision process, so systems learn not just what the right answer was, but how ambiguity gets resolved. Practically, humans intervene at branch points\u2014selecting from AI-generated options, correcting assumptions, and redirecting the workflow. Each intervention becomes a high-value training signal. When the platform detects an edge case or a deviation from the expected process, it can prompt for a brief, structured rationale, capturing decision factors without requiring lengthy free-form reasoning logs.<\/p>\n<h3 class=\"wp-block-heading\"><strong>Building toward expertise amplification<\/strong><\/h3>\n<p>The goal is to permanently embed the accumulated expertise of thousands of domain experts\u2014their knowledge, decisions, and reasoning\u2014into an AI platform that amplifies what every operator can accomplish. Done well, this produces a quality of execution that neither humans nor AI achieve independently: higher consistency, improved throughput, and measurable operational gains. Operators can focus on more consequential work, supported by an AI that has already completed the analytical groundwork across thousands of analogous prior cases.<\/p>\n<p>The broader implication for enterprise leaders is straightforward. Advantage in AI won\u2019t be determined by access to general-purpose models alone. 
It will come from an organization\u2019s ability to capture, refine, and compound what it knows (its data, decisions, and operational judgment) while building the controls required for high-stakes environments. As AI shifts from experimentation to infrastructure, the most durable edge may belong to the companies that understand the work well enough to instrument it and can turn that understanding into systems that improve with use.<\/p>\n<p><em>This content was produced by Ensemble. It was not written by MIT Technology Review\u2019s editorial staff.<\/em><\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>There\u2019s a fault line running through enterprise AI, and it\u2019s  [&#8230;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"content-type":"","footnotes":""},"categories":[226],"tags":[],"class_list":["post-21253","post","type-post","status-publish","format-standard","hentry","category-technology"],"acf":[],"_links":{"self":[{"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/posts\/21253","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/comments?post=21253"}],"version-history":[{"count":0,"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/posts\/21253\/revisions"}],"wp:attachment":[{"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/media?parent=21253"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/categories?post=21253"},{"taxonomy":"post_tag",
"embeddable":true,"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/tags?post=21253"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}