{"id":19618,"date":"2026-03-16T13:31:23","date_gmt":"2026-03-16T13:31:23","guid":{"rendered":"https:\/\/ideainthebox.com\/index.php\/2026\/03\/16\/nurturing-agentic-ai-beyond-the-toddler-stage\/"},"modified":"2026-03-16T13:31:23","modified_gmt":"2026-03-16T13:31:23","slug":"nurturing-agentic-ai-beyond-the-toddler-stage","status":"publish","type":"post","link":"https:\/\/ideainthebox.com\/index.php\/2026\/03\/16\/nurturing-agentic-ai-beyond-the-toddler-stage\/","title":{"rendered":"Nurturing agentic AI beyond the toddler stage"},"content":{"rendered":"<div>\n<p>Parents of young children face a lot of fears about developmental milestones, from infancy through adulthood. The number of months it takes a baby to learn to talk or walk is often used as a benchmark for wellness, or an indicator of additional tests needed to properly diagnose a potential health condition. A parent rejoices over the child\u2019s first steps and then realizes how much has changed when the child can quickly walk outside, instead of slowly crawling in a safe area inside. 
Suddenly safety, including childproofing, requires a completely different lens and approach.<\/p>\n<figure class=\"wp-block-image size-large\"><img fetchpriority=\"high\" decoding=\"async\" width=\"1254\" height=\"836\" src=\"https:\/\/wp.technologyreview.com\/wp-content\/uploads\/2026\/03\/MITTRI-Intel-iStock-2241096601.jpg?w=1254\" alt=\"\" class=\"wp-image-1134002\" srcset=\"https:\/\/wp.technologyreview.com\/wp-content\/uploads\/2026\/03\/MITTRI-Intel-iStock-2241096601.jpg 1254w, https:\/\/wp.technologyreview.com\/wp-content\/uploads\/2026\/03\/MITTRI-Intel-iStock-2241096601.jpg?resize=300,200 300w, https:\/\/wp.technologyreview.com\/wp-content\/uploads\/2026\/03\/MITTRI-Intel-iStock-2241096601.jpg?resize=768,512 768w\" sizes=\"(max-width: 1254px) 100vw, 1254px\"><\/figure>\n<p>Generative AI hit toddlerhood between December 2025 and January 2026 with the introduction of no-code tools from multiple vendors and the debut of OpenClaw, an open-source personal agent posted on GitHub. No more crawling on the carpet\u2014the generative AI tech baby broke into a sprint, and very few organizations had governance principles that were operationally ready.<\/p>\n<h3 class=\"wp-block-heading\"><strong>The accountability challenge: It\u2019s not them, it\u2019s you<\/strong><\/h3>\n<p>Until now, governance has been focused on model output risks, with humans in the loop before consequential decisions were made\u2014such as with loan approvals or job applications. 
Model behavior, including drift, alignment, data exfiltration, and poisoning, was the focus. The pace was set by a human prompting a model in a chatbot format, with plenty of back-and-forth interaction between machine and human.<\/p>\n<p>Today, with autonomous agents operating in complex workflows, the vision and the benefits of applied AI require significantly fewer humans in the loop. The point is to operate a business at machine pace by automating manual tasks that have clear architecture and decision rules. The goal, from a liability standpoint, is that a machine operating a workflow introduces no more enterprise or business risk than a human operating the same workflow. <a href=\"https:\/\/www.cxtoday.com\/ai-automation-in-cx\/ai-agents-humans-lloyds-mastercard-and-others-lead-agentic-ai-adoption\/\">CX Today summarizes<\/a> the situation succinctly: \u201cAI does the work, humans own the risk.\u201d California state law AB 316, which went into effect January 1, 2026, removes the \u201cAI did it; I didn\u2019t approve it\u201d excuse. This is similar to parenting, where an adult is held responsible for a child\u2019s actions that negatively impact the larger community.<\/p>\n<p>The challenge is that without building in code that enforces operational governance aligned to different levels of risk and liability along the entire workflow, the benefit of autonomous AI agents is negated. In the past, governance was static and aligned to the pace of interaction typical for a chatbot. However, autonomous AI by design removes humans from many decisions, and governance checkpoints can disappear along with them. 
<\/p>\n<h3 class=\"wp-block-heading\"><strong>Considering permissions<\/strong><\/h3>\n<p>Much like handing a three-year-old child a video game console that remotely controls an Abrams tank or an armed drone, leaving a probabilistic system that can change critical enterprise data operating without real-time guardrails carries significant risks. For instance, agents that integrate and chain actions across multiple corporate systems can drift beyond privileges that a single human user would be granted. To move forward successfully, governance must shift beyond policy set by committees to operational code built into the workflows from the start.<\/p>\n<p>A humorous meme about the behavior of toddlers with toys starts with all the reasons that whatever toy you have is mine and ends with a broken toy that is definitely yours. For example, OpenClaw delivered a user experience closer to working with a human assistant, but the excitement shifted as <a href=\"https:\/\/www.zdnet.com\/article\/openclaw-moltbot-clawdbot-5-reasons-viral-ai-agent-security-nightmare\/\">security experts<\/a> realized inexperienced users could easily be compromised by using it.<\/p>\n<p>For decades, enterprise IT has lived with shadow IT and the reality that skilled technical teams must take over and clean up assets they did not architect or install, much like the toddler handing back a broken toy. With autonomous agents, the risks are larger: persistent service account credentials, long-lived API tokens, and permissions to make decisions over core file systems. 
To meet this challenge, it\u2019s imperative to allocate appropriate IT budget and labor upfront to sustain central discovery, oversight, and remediation for the thousands of employee- or department-created agents.<\/p>\n<h3 class=\"wp-block-heading\"><strong>Having a retirement plan<\/strong><\/h3>\n<p>Recently, an acquaintance mentioned that she saved a client hundreds of thousands of dollars by identifying and then ending a \u201czombie project\u201d\u2014a neglected or failed AI pilot left running on a GPU cloud instance. Potentially thousands of agents inside a business risk becoming a zombie fleet. Today, many executives encourage employees to use AI\u2014or else\u2014and employees are told to create their own AI-first workflows or AI assistants. With the utility of something like OpenClaw and top-down directives, it is easy to project that the number of build-my-own agents coming to the office with their human employee will explode. Since an AI agent is a program that would fall under the definition of company-owned IP, as an employee changes departments or companies, those agents may be orphaned. There needs to be proactive policy and governance to decommission and retire any agents linked to a specific employee ID and its permissions.<\/p>\n<h3 class=\"wp-block-heading\"><strong>Financial optimization is governance out of the gate<\/strong><\/h3>\n<p>While autonomous AI sounds to some executives like a way to improve operating margins by limiting human capital, many are finding that ROI framed as human labor replacement is the wrong angle to take. Adding AI capabilities to the enterprise does not mean purchasing a new software tool with predictable instance-per-hour or per-seat pricing. 
A December 2025 <a href=\"https:\/\/www.datarobot.com\/resources\/a-strategic-approach-to-scaling-generative-and-agentic-ai\/thank-you\">IDC survey<\/a> sponsored by DataRobot indicated that 96% of organizations deploying generative AI and 92% of those implementing agentic AI reported costs that were higher or much higher than expected.<\/p>\n<p>The survey separates the concepts of governance and ROI, but as AI systems scale across large enterprises, financial and liability governance should be architected into the workflows from the beginning. Part of enterprise-class governance stems from predicting and adhering to allocated budgets. Unlike the software financial models of per-seat costs with support and maintenance fees, AI is priced on consumption, and usage costs scale as the workflow scales across the enterprise: the more users, the more tokens or compute time, and the higher the bill. Think of it as a tab left open, or an online retailer\u2019s digital shopping cart button unlocked on a toddler\u2019s electronic game device.<\/p>\n<p>Cloud FinOps was deterministic, but generative AI and agentic AI systems built on generative AI are probabilistic. Some AI-first <a href=\"https:\/\/www.ainvest.com\/news\/ai-agent-economics-100k-year-cost-barrier-2602\/\">founders<\/a> are realizing that a single agent\u2019s token costs can run as high as $100,000 per year. Without guardrails built in from the start, chaining complex autonomous agents that run unsupervised for long periods of time can easily blow past the budget for hiring a junior developer.<\/p>\n<h3 class=\"wp-block-heading\"><strong>Keeping humans in the loop remains critical<\/strong><\/h3>\n<p>The promise of autonomous agentic AI is acceleration of business operations, product introductions, customer experience, and customer retention. Shifting to machine-speed decisions without humans in or on the loop for these key functions significantly changes the governance landscape. 
While many of the principles around proactive permissions, discovery, audit, remediation, and financial operations\/optimization are the same, how they are executed has to shift to keep pace with autonomous agentic AI.<\/p>\n<p><em>This content was produced by Intel. It was not written by MIT Technology Review\u2019s editorial staff.<\/em><\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Parents of young children face a lot of fears about  [&#8230;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"content-type":"","footnotes":""},"categories":[226],"tags":[],"class_list":["post-19618","post","type-post","status-publish","format-standard","hentry","category-technology"],"acf":[],"_links":{"self":[{"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/posts\/19618","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/comments?post=19618"}],"version-history":[{"count":0,"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/posts\/19618\/revisions"}],"wp:attachment":[{"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/media?parent=19618"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/categories?post=19618"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/tags?post=19618"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}