{"id":21299,"date":"2026-04-17T10:34:31","date_gmt":"2026-04-17T10:34:31","guid":{"rendered":"https:\/\/ideainthebox.com\/index.php\/2026\/04\/17\/how-robots-learn-brief-contemporary-history\/"},"modified":"2026-04-17T10:34:31","modified_gmt":"2026-04-17T10:34:31","slug":"how-robots-learn-brief-contemporary-history","status":"publish","type":"post","link":"https:\/\/ideainthebox.com\/index.php\/2026\/04\/17\/how-robots-learn-brief-contemporary-history\/","title":{"rendered":"How robots learn: A brief, contemporary history"},"content":{"rendered":"<div>\n<p>Roboticists used to dream big but build small. They\u2019d hope to match or exceed the extraordinary complexity of the human body, and then they\u2019d spend their career refining robotic arms for auto plants. Aim for C-3P0; end up with the Roomba.\u00a0<\/p>\n<p>The real ambition for many of these researchers was the robot of science fiction\u2014one that could move through the world, adapt to different environments, and interact safely and helpfully with people. For the socially minded, such a machine could help those with mobility issues, ease loneliness, or do work too dangerous for humans. For the more financially inclined, it would mean a bottomless source of wage-free labor. Either way, a long history of failure left most of Silicon Valley hesitant to bet on helpful robots.<\/p>\n<div class=\"wp-block-group is-layout-constrained wp-block-group-is-layout-constrained\">\n<p>That has changed. The machines are yet unbuilt, but the money is flowing: Companies and investors put $6.1 billion into humanoid robots in 2025 alone, four times what was invested in 2024.\u00a0<\/p>\n<p>What happened? A revolution in how machines have learned to interact with the world.\u00a0<\/p>\n<\/div>\n<p>Imagine you\u2019d like a pair of robot arms installed in your home purely to do one thing: fold clothes. How would it learn to do that? You could start by writing rules. 
Check the fabric to figure out how much deformation it can tolerate before tearing. Identify a shirt\u2019s collar. Move the gripper to the left sleeve, lift it, and fold it inward by exactly this distance. Repeat for the right sleeve. If the shirt is rotated, turn the plan accordingly. If the sleeve is twisted, correct it. Very quickly the number of rules explodes, but a complete accounting of them could produce reliable results. This was the original craft of robotics: anticipating every possibility and encoding it in advance.<\/p>\n<p>Around 2015, the cutting edge started to do things differently: Build a digital simulation of the robotic arms and the clothes, and give the program a reward signal every time it folds successfully and a ding every time it fails. This way, it gets better by trying all sorts of techniques through trial and error, with millions of iterations\u2014the same way AI got good at playing <a href=\"https:\/\/www.technologyreview.com\/2026\/02\/27\/1133624\/ai-is-rewiring-how-the-worlds-best-go-players-think\/\">games<\/a>.<\/p>\n<p>The arrival of ChatGPT in 2022 catalyzed the current boom. Trained on vast amounts of text, large language models work not through trial and error but by learning to predict what word should come next in a sentence. Similar models adapted to robotics were soon able to absorb pictures, sensor readings, and the position of a robot\u2019s joints and predict the next action the machine should take, issuing dozens of motor commands every second.<\/p>\n<p>This conceptual shift\u2014to reliance on AI models that ingest large amounts of data\u2014seems to work whether that helpful robot is supposed to talk to people, move through an environment, or even do complicated tasks. And it was paired with other ideas about how to accomplish this new way of learning, like deploying robots even if they aren\u2019t yet perfect so they can learn from the environment they\u2019re meant to work in. 
Today, Silicon Valley roboticists are dreaming big again. Here\u2019s how that happened.\u00a0<\/p>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\">\n<h4 class=\"wp-block-heading has-vivid-red-color has-text-color has-link-color wp-elements-028f6cce0984b18f743fd057c4943fbc\">Jibo<\/h4>\n<h3 class=\"wp-block-heading\">Jibo<\/h3>\n<p><em>A movable social robot carried out conversations long before the age of LLMs.<\/em><\/p>\n<p>An MIT robotics researcher named Cynthia Breazeal introduced an armless, legless, faceless robot called Jibo to the world in 2014. It looked, in fact, like a lamp. Breazeal\u2019s aim was to create a social robot for families, and the idea pulled in $3.7 million in a crowdfunding campaign. Early preorders cost $749.<\/p>\n<p>The early Jibo could introduce itself and dance to entertain kids, but that was about it. The vision was always for it to become a sort of embodied assistant that could handle everything from scheduling and emails to telling stories. 
It earned a number of devoted users, but ultimately the company shut down in 2019.<\/p>\n<div class=\"wp-block-image\">\n<figure class=\"wp-block-image size-large\"><img fetchpriority=\"high\" decoding=\"async\" height=\"2000\" width=\"1604\" src=\"https:\/\/wp.technologyreview.com\/wp-content\/uploads\/2026\/04\/jibo.jpg?w=1604\" data-orig-src=\"https:\/\/wp.technologyreview.com\/wp-content\/uploads\/2026\/04\/jibo.jpg?w=1604\" alt='A robot with a shape vaguely like a lowercase letter \"i\"' class=\"lazyload wp-image-1135760\" srcset=\"data:image\/svg+xml,%3Csvg%20xmlns%3D%27http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%27%20width%3D%271604%27%20height%3D%272000%27%20viewBox%3D%270%200%201604%202000%27%3E%3Crect%20width%3D%271604%27%20height%3D%272000%27%20fill-opacity%3D%220%22%2F%3E%3C%2Fsvg%3E\" data-srcset=\"https:\/\/wp.technologyreview.com\/wp-content\/uploads\/2026\/04\/jibo.jpg 2406w, https:\/\/wp.technologyreview.com\/wp-content\/uploads\/2026\/04\/jibo.jpg?resize=241,300 241w, https:\/\/wp.technologyreview.com\/wp-content\/uploads\/2026\/04\/jibo.jpg?resize=768,958 768w, https:\/\/wp.technologyreview.com\/wp-content\/uploads\/2026\/04\/jibo.jpg?resize=1604,2000 1604w, https:\/\/wp.technologyreview.com\/wp-content\/uploads\/2026\/04\/jibo.jpg?resize=1232,1536 1232w, https:\/\/wp.technologyreview.com\/wp-content\/uploads\/2026\/04\/jibo.jpg?resize=1642,2048 1642w\" data-sizes=\"auto\" data-orig-sizes=\"(max-width: 1604px) 100vw, 1604px\"><figcaption class=\"wp-element-caption\">A crowdfunding campaign started in 2014 and drew 4,800 Jibo preorders.<\/figcaption><div class=\"image-credit\">COURTESY OF MIT MEDIA LAB<\/div>\n<\/figure>\n<\/div>\n<p>In retrospect, one thing that Jibo really needed was better language capabilities. It was competing against Apple\u2019s Siri and Amazon\u2019s Alexa, and all those technologies at the time relied on heavy scripting. 
In broad terms, when you spoke to them, software would translate your speech into text, analyze what you wanted, and create a response pulled from preapproved snippets. Those snippets could be charming, but they were also repetitive and simply boring<em>\u2014<\/em>downright robotic. That was especially a challenge for a robot that was supposed to be social and family oriented.\u00a0<\/p>\n<p>What has happened since, of course, is a revolution in how machines can generate language. Voice mode from any leading AI provider is now engaging and impressive, and multiple hardware startups are trying (and failing) to build products that take advantage of it.\u00a0<\/p>\n<p>But that comes with a new risk: While scripted conversations can\u2019t really go off the rails, ones generated by AI certainly can. Some popular AI toys have, for example, talked to kids about how to find matches and knives.\u00a0<\/p>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\">\n<h4 class=\"wp-block-heading has-vivid-red-color has-text-color has-link-color wp-elements-f7833379ef6bc3238398fe8b1a28f6f0\">OpenAI<\/h4>\n<h3 class=\"wp-block-heading\">Dactyl<\/h3>\n<p><em>A robot hand trained with simulations tries to model the unpredictability and variation of the real world.<\/em><\/p>\n<p>By 2018, every leading robotics lab was trying to scrap the old scripted rules and train robots through trial and error. OpenAI tried to train its robotic hand, Dactyl, virtually<em>\u2014<\/em>with digital models of the hand and of the palm-size cubes Dactyl was supposed to manipulate. The cubes had letters and numbers on their faces; the model might set a task like \u201cRotate the cube so the red side with the letter O faces upward.\u201d<\/p>\n<p>Here\u2019s the problem: A robotic hand might get really good at doing this in its simulated world, but when you take that program and ask it to work on a real version in the real world, the slight differences between the two can cause things to go awry. 
Colors might be slightly different, or the deformable rubber in the robot\u2019s fingertips could turn out to be stretchier than it was in simulation.<\/p>\n<div class=\"wp-block-image\">\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1616\" height=\"1080\" src=\"https:\/\/wp.technologyreview.com\/wp-content\/uploads\/2026\/04\/solving-rubiks-cube.jpg?w=1616\" data-orig-src=\"https:\/\/wp.technologyreview.com\/wp-content\/uploads\/2026\/04\/solving-rubiks-cube.jpg?w=1616\" alt=\"a Dactyl robot hand holds a Rubik\u2019s Cube\" class=\"lazyload wp-image-1135762\" srcset=\"data:image\/svg+xml,%3Csvg%20xmlns%3D%27http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%27%20width%3D%271616%27%20height%3D%271080%27%20viewBox%3D%270%200%201616%201080%27%3E%3Crect%20width%3D%271616%27%20height%3D%271080%27%20fill-opacity%3D%220%22%2F%3E%3C%2Fsvg%3E\" data-srcset=\"https:\/\/wp.technologyreview.com\/wp-content\/uploads\/2026\/04\/solving-rubiks-cube.jpg 1616w, https:\/\/wp.technologyreview.com\/wp-content\/uploads\/2026\/04\/solving-rubiks-cube.jpg?resize=300,200 300w, https:\/\/wp.technologyreview.com\/wp-content\/uploads\/2026\/04\/solving-rubiks-cube.jpg?resize=768,513 768w, https:\/\/wp.technologyreview.com\/wp-content\/uploads\/2026\/04\/solving-rubiks-cube.jpg?resize=1536,1027 1536w\" data-sizes=\"auto\" data-orig-sizes=\"(max-width: 1616px) 100vw, 1616px\"><figcaption class=\"wp-element-caption\">Dactyl, part of OpenAI\u2019s first attempt at robotics, was trained in simulation to solve Rubik\u2019s Cubes.<\/figcaption><div class=\"image-credit\">COURTESY OF OPENAI<\/div>\n<\/figure>\n<\/div>\n<p>The solution is called domain randomization. You essentially create millions of simulated worlds that all vary slightly and randomly from one another. In each one the friction might be lower, or the lighting harsher, or the colors darker. Exposure to enough of this variation means the robots will be better able to manipulate the cube in the real world. 
The approach worked on Dactyl, and a year later OpenAI used the same core techniques to do something harder: solving Rubik\u2019s Cubes (though the hand succeeded only 60% of the time, and just 20% when the scrambles were particularly hard).\u00a0<\/p>\n<p>Still, the limits of simulation mean that this technique plays a far smaller role today than it did in 2018. OpenAI shuttered its robotics effort in 2021 but has recently started the division up again<em>\u2014<\/em>reportedly focusing on humanoids.\u00a0<\/p>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\">\n<h4 class=\"wp-block-heading has-vivid-red-color has-text-color has-link-color wp-elements-8c83d52060ef321cba6d434c3bf149be\">Google DeepMind<\/h4>\n<h3 class=\"wp-block-heading\">RT-2<\/h3>\n<p><em>Training on images from across the internet helps robots translate language into action.<\/em><\/p>\n<p>Around 2022, Google\u2019s robotics team was up to some strange things. It spent 17 months handing people robot controllers and filming them doing everything from picking up bags of chips to opening jars. The team ended up cataloguing 700 different tasks.<\/p>\n<p>The point was to build and test one of the first large-scale foundation models for robotics. As with large language models, the idea was to input lots of text, tokenize it into a format an algorithm could work with, and then generate an output. Google\u2019s RT-1 received input about what the robot was looking at and how the many parts of the robotic arm were positioned; then it took an instruction and translated it into motor commands to move the robot. 
When it had seen tasks before, it carried out 97% of them successfully; it succeeded at 76% of the instructions it hadn\u2019t seen before.\u00a0<\/p>\n<div class=\"wp-block-image\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"2160\" height=\"1620\" src=\"https:\/\/wp.technologyreview.com\/wp-content\/uploads\/2026\/04\/deep-mind_c898ae.jpg?w=2160\" data-orig-src=\"https:\/\/wp.technologyreview.com\/wp-content\/uploads\/2026\/04\/deep-mind_c898ae.jpg?w=2160\" alt=\"a robot at a table of small toys\" class=\"lazyload wp-image-1135764\" srcset=\"data:image\/svg+xml,%3Csvg%20xmlns%3D%27http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%27%20width%3D%272160%27%20height%3D%271620%27%20viewBox%3D%270%200%202160%201620%27%3E%3Crect%20width%3D%272160%27%20height%3D%271620%27%20fill-opacity%3D%220%22%2F%3E%3C%2Fsvg%3E\" data-srcset=\"https:\/\/wp.technologyreview.com\/wp-content\/uploads\/2026\/04\/deep-mind_c898ae.jpg 2160w, https:\/\/wp.technologyreview.com\/wp-content\/uploads\/2026\/04\/deep-mind_c898ae.jpg?resize=300,225 300w, https:\/\/wp.technologyreview.com\/wp-content\/uploads\/2026\/04\/deep-mind_c898ae.jpg?resize=768,576 768w, https:\/\/wp.technologyreview.com\/wp-content\/uploads\/2026\/04\/deep-mind_c898ae.jpg?resize=1536,1152 1536w, https:\/\/wp.technologyreview.com\/wp-content\/uploads\/2026\/04\/deep-mind_c898ae.jpg?resize=2048,1536 2048w\" data-sizes=\"auto\" data-orig-sizes=\"auto, (max-width: 2160px) 100vw, 2160px\"><figcaption class=\"wp-element-caption\">The model RT-2, for Robotic Transformer 2, incorporated internet data to help robots process what they were seeing.<\/figcaption><div class=\"image-credit\">COURTESY OF GOOGLE DEEPMIND<\/div>\n<\/figure>\n<\/div>\n<p>The second iteration, RT-2, came out the following year and went even further. 
Instead of training on data specific to robotics, it went broad: It trained on more general images from across the internet, like the vision-language models lots of researchers were working on at the time. That allowed the robot to interpret where certain objects were in the scene.<\/p>\n<p>\u201cAll these other things were unlocked,\u201d says Kanishka Rao, a roboticist at Google DeepMind who led work on both iterations. \u201cWe could do things now like \u2018Put the Coke can near the picture of Taylor Swift.\u2019\u201d\u00a0<\/p>\n<p>In 2025, Google DeepMind further fused the worlds of large language models and robotics, releasing a Gemini Robotics model with improved ability to understand commands in natural language.\u00a0<\/p>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\">\n<h4 class=\"wp-block-heading has-vivid-red-color has-text-color has-link-color wp-elements-22e9d3143337a4b524f1dc069ffe63ca\">Covariant<\/h4>\n<h3 class=\"wp-block-heading\">RFM-1<\/h3>\n<p><em>An AI model that allows robotic arms to act like coworkers.<\/em><\/p>\n<p>In 2017, before OpenAI shuttered its first robotics team, a group of its engineers spun out a project called Covariant, aiming to build not sci-fi humanoids but the most pragmatic of all robots: an arm that could pick up and move things in warehouses. After building a system based on foundation models similar to Google\u2019s, Covariant deployed this platform in warehouses like those operated by Crate &amp; Barrel and treated it as a data collection pipeline.\u00a0<\/p>\n<p>By 2024, Covariant had released a robotics model, RFM-1, that you could interact with like a coworker. If you showed an arm many sleeves of tennis balls, for example, you could then instruct it to move each sleeve to a separate area. 
And the robot could respond<em>\u2014<\/em>perhaps predicting that it wouldn\u2019t be able to get a good grip on the item and then asking for advice on which particular suction cups it should use.\u00a0<\/p>\n<p>This sort of thing had been done in experiments, but Covariant was launching it at significant scale. The company now had cameras and data collection machines in every customer location, feeding back even more data for the model to train on.<\/p>\n<div class=\"wp-block-image\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1990\" height=\"1493\" src=\"https:\/\/wp.technologyreview.com\/wp-content\/uploads\/2026\/04\/Induction_Apparel_OttoGroup_B-Business.jpg?w=1990\" data-orig-src=\"https:\/\/wp.technologyreview.com\/wp-content\/uploads\/2026\/04\/Induction_Apparel_OttoGroup_B-Business.jpg?w=1990\" alt=\"a warehouse robot arm lifts an object with many suckers to place in a bin\" class=\"lazyload wp-image-1135759\" srcset=\"data:image\/svg+xml,%3Csvg%20xmlns%3D%27http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%27%20width%3D%271990%27%20height%3D%271493%27%20viewBox%3D%270%200%201990%201493%27%3E%3Crect%20width%3D%271990%27%20height%3D%271493%27%20fill-opacity%3D%220%22%2F%3E%3C%2Fsvg%3E\" data-srcset=\"https:\/\/wp.technologyreview.com\/wp-content\/uploads\/2026\/04\/Induction_Apparel_OttoGroup_B-Business.jpg 1990w, https:\/\/wp.technologyreview.com\/wp-content\/uploads\/2026\/04\/Induction_Apparel_OttoGroup_B-Business.jpg?resize=300,225 300w, https:\/\/wp.technologyreview.com\/wp-content\/uploads\/2026\/04\/Induction_Apparel_OttoGroup_B-Business.jpg?resize=768,576 768w, https:\/\/wp.technologyreview.com\/wp-content\/uploads\/2026\/04\/Induction_Apparel_OttoGroup_B-Business.jpg?resize=1536,1152 1536w\" data-sizes=\"auto\" data-orig-sizes=\"auto, (max-width: 1990px) 100vw, 1990px\"><figcaption class=\"wp-element-caption\">A Covariant robot demonstrates \u201cinduction\u201d\u2014the common warehouse task of 
placing objects on sorters or conveyors.<\/figcaption><div class=\"image-credit\">COURTESY OF COVARIANT<\/div>\n<\/figure>\n<\/div>\n<p>It wasn\u2019t perfect. In a demo in March 2024 with an array of kitchen items, the robot struggled when it was asked to \u201creturn the banana\u201d to its original location. It picked up a sponge, then an apple, then a host of other items before it finally accomplished the task.\u00a0<\/p>\n<p>It \u201cdoesn\u2019t understand the new concept\u201d of retracing its steps, cofounder Peter Chen told me at the time. \u201cBut it\u2019s a good example<em>\u2014<\/em>it might not work well yet in the places where you don\u2019t have good training data.\u201d<\/p>\n<p>Chen and fellow founder Pieter Abbeel were soon hired by Amazon, which is currently licensing Covariant\u2019s robotics model (Amazon did not respond to questions about how it\u2019s being used, but the company runs an estimated 1,300 warehouses in the US alone).\u00a0<\/p>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\">\n<h4 class=\"wp-block-heading has-vivid-red-color has-text-color has-link-color wp-elements-7f4dd0973421dd2eab7c89e497461b22\">Agility Robotics<\/h4>\n<h3 class=\"wp-block-heading\">Digit<\/h3>\n<p><em>Companies are putting this humanoid to the test in real-world settings.<\/em><\/p>\n<p>The new investment dollars flowing to robotics startups are aimed largely at robots shaped not like lamps or arms but like people. Humanoid robots are supposed to be able to seamlessly enter the spaces and jobs where humans currently work, avoiding the need to retool assembly lines to accommodate new shapes such as giant arms.\u00a0<\/p>\n<p>It\u2019s easier said than done. 
In the rare cases where humanoids appear in real warehouses, they\u2019re often confined to test zones and pilot programs.\u00a0<\/p>\n<div class=\"wp-block-image\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"917\" height=\"844\" src=\"https:\/\/wp.technologyreview.com\/wp-content\/uploads\/2026\/04\/Digit-GXO-RaaS-3-1.jpg?w=917\" data-orig-src=\"https:\/\/wp.technologyreview.com\/wp-content\/uploads\/2026\/04\/Digit-GXO-RaaS-3-1.jpg?w=917\" alt=\"Digit humanoid robot putting a plastic bin on a conveyor belt\" class=\"lazyload wp-image-1135765\" srcset=\"data:image\/svg+xml,%3Csvg%20xmlns%3D%27http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%27%20width%3D%27917%27%20height%3D%27844%27%20viewBox%3D%270%200%20917%20844%27%3E%3Crect%20width%3D%27917%27%20height%3D%27844%27%20fill-opacity%3D%220%22%2F%3E%3C%2Fsvg%3E\" data-srcset=\"https:\/\/wp.technologyreview.com\/wp-content\/uploads\/2026\/04\/Digit-GXO-RaaS-3-1.jpg 917w, https:\/\/wp.technologyreview.com\/wp-content\/uploads\/2026\/04\/Digit-GXO-RaaS-3-1.jpg?resize=300,276 300w, https:\/\/wp.technologyreview.com\/wp-content\/uploads\/2026\/04\/Digit-GXO-RaaS-3-1.jpg?resize=768,707 768w\" data-sizes=\"auto\" data-orig-sizes=\"auto, (max-width: 917px) 100vw, 917px\"><figcaption class=\"wp-element-caption\">Amazon and other companies are using Digit to help move shipping totes.<\/figcaption><div class=\"image-credit\">COURTESY OF AGILITY ROBOTICS<\/div>\n<\/figure>\n<\/div>\n<p>That said, Agility\u2019s humanoid Digit appears to be doing some real work. The design<em>\u2014<\/em>with exposed joints and a distinctly unhuman head<em>\u2014<\/em>is driven more by function than by sci-fi aesthetics. Amazon, Toyota, and GXO (a logistics giant with customers like Apple and Nike) have all deployed it<em>\u2014<\/em>making it one of the first examples of a humanoid robot that companies see as providing actual cost savings rather than novelty. 
Their Digits spend their days picking up, moving, and stacking shipping totes.<\/p>\n<p>The current Digit is still a long way from the humanlike helper Silicon Valley is betting on, though. It can lift only 35 pounds, for example<em>\u2014<\/em>and every time Agility makes Digit stronger, its battery gets heavier and it has to recharge more often. And standards organizations say humanoids need stricter safety rules than most industrial robots, because they\u2019re designed to be mobile and spend time in proximity to people.\u00a0<\/p>\n<p>But Digit shows that this revolution in robot training isn\u2019t converging on a single method. Agility relies on simulation techniques like those OpenAI used to train its hand, and the company has worked with Google\u2019s Gemini models to help its robots adapt to new environments. That\u2019s where more than a decade of experiments have gotten the industry: Now it\u2019s building big.<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Roboticists used to dream big but build small. 
They\u2019d hope  [&#8230;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"content-type":"","footnotes":""},"categories":[226],"tags":[],"class_list":["post-21299","post","type-post","status-publish","format-standard","hentry","category-technology"],"acf":[],"_links":{"self":[{"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/posts\/21299","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/comments?post=21299"}],"version-history":[{"count":0,"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/posts\/21299\/revisions"}],"wp:attachment":[{"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/media?parent=21299"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/categories?post=21299"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/tags?post=21299"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}