{"id":18538,"date":"2026-02-24T12:02:23","date_gmt":"2026-02-24T12:02:23","guid":{"rendered":"https:\/\/ideainthebox.com\/index.php\/2026\/02\/24\/ai-is-not-improving-productivity-nobel-laureate-daron-acemoglu\/"},"modified":"2026-02-24T12:02:23","modified_gmt":"2026-02-24T12:02:23","slug":"ai-is-not-improving-productivity-nobel-laureate-daron-acemoglu","status":"publish","type":"post","link":"https:\/\/ideainthebox.com\/index.php\/2026\/02\/24\/ai-is-not-improving-productivity-nobel-laureate-daron-acemoglu\/","title":{"rendered":"AI Is Not Improving Productivity: Nobel Laureate Daron Acemoglu"},"content":{"rendered":"<div>\n<p>In this bonus episode of the <cite>Me, Myself, and AI<\/cite> podcast, Nobel Prize-winning economist Daron Acemoglu joins host Sam Ransbotham to challenge some of the most common assumptions about artificial intelligence\u2019s future. Drawing on his book <cite>Power and Progress<\/cite>, Daron argues that technology doesn\u2019t have a fixed destiny \u2014 and that today\u2019s choices will determine whether AI boosts workers or simply accelerates automation and inequality. He makes a case for focusing on new tasks that complement human skills, rather than replacing them, and warns that current incentives push AI toward centralization and automation by default. The conversation tackles productivity myths, reliability risks, and why regulation should proactively steer AI toward social good.<\/p>\n<aside class=\"callout-info\">\n<img decoding=\"async\" src=\"https:\/\/sloanreview.mit.edu\/wp-content\/uploads\/2026\/02\/MMAI-S12-B2-Acemoglu-MIT-headshot-600.jpg\" alt=\"Daron Acemoglu\">\n<h4>Daron Acemoglu, MIT<\/h4>\n<p>Daron Acemoglu is an institute professor at MIT, faculty codirector of the James M. and Cathleen D. 
Stone Center on Inequality and Shaping the Future of Work, and a research affiliate at MIT\u2019s newly established Blueprint Labs. He is an elected fellow of the National Academy of Sciences, American Philosophical Society, the British Academy of Sciences, the Turkish Academy of Sciences, the American Academy of Arts and Sciences, the Econometric Society, the European Economic Association, and the Society of Labor Economists. He is also a member of the Group of Thirty. He has authored six books, including <cite>Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity<\/cite> with Simon Johnson. His work in economics has been recognized around the world, notably with the Nobel Prize in economic sciences, along with co-laureates Johnson and James A. Robinson, in 2024.<\/p>\n<\/aside>\n<p>Subscribe to <cite>Me, Myself, and AI<\/cite> on <a href=\"https:\/\/podcasts.apple.com\/us\/podcast\/me-myself-and-ai\/id1533115958\" target=\"_blank\" rel=\"noopener\">Apple Podcasts<\/a> or <a href=\"https:\/\/open.spotify.com\/show\/7ysPBcYtOPVgI6W5an6lup\" target=\"_blank\" rel=\"noopener\">Spotify<\/a>.<\/p>\n<h4>Transcript<\/h4>\n<p><strong>Allison Ryder:<\/strong> Hi, everyone. We\u2019re back with a bonus episode, profiling another thought leader in the technology research space. MIT institute professor Daron Acemoglu is a Nobel Prize-winning economist and the author of <cite>Power and Progress<\/cite>. He joins Sam today for a conversation spanning technology advancements, limitations, and regulation. We\u2019re back on March 10 with more new episodes. For now, we hope you enjoy this conversation.<\/p>\n<p><strong>Daron Acemoglu:<\/strong> I\u2019m Daron Acemoglu, institute professor at MIT, and you are listening to <cite>Me, Myself, and AI<\/cite>. <\/p>\n<p><strong>Sam Ransbotham:<\/strong> Welcome to <cite>Me, Myself, and AI<\/cite>, a podcast from <cite>MIT Sloan Management Review<\/cite> exploring the future of artificial intelligence. 
I\u2019m Sam Ransbotham, professor of analytics at Boston College. I\u2019ve been researching data, analytics, and AI at <cite>MIT SMR<\/cite> since 2014, with research articles, annual industry reports, case studies, and now 12 seasons of podcast episodes. In each episode, corporate leaders, cutting-edge researchers, and AI policy makers join us to break down what separates AI hype from AI success.<\/p>\n<p>Hi, listeners. Thanks again to everyone for joining us. I\u2019m excited to be talking with Daron Acemoglu, professor of economics at MIT. Daron works extensively on economic development, labor economics, and the economics of technology. In 2024, he was awarded the Nobel Prize in economics for this work. His insights on the interplay between institutions, technology change, and inequality are particularly relevant for businesses today. Of course, our listeners will be most interested in Daron\u2019s thoughts on AI. Daron, [it\u2019s] great to have you on the podcast.<\/p>\n<p><strong>Daron Acemoglu:<\/strong> My pleasure. Thanks, Sam. <\/p>\n<p><strong>Sam Ransbotham:<\/strong> Your work spans institutions, technology, and equality. Can you share some of the themes in general from your past research? <\/p>\n<p><strong>Daron Acemoglu:<\/strong> I got into economics because I was fascinated by what I saw around me in my very young teen years about very divergent economic, political, and social outcomes across countries, huge disparities in terms of wealth, in terms of poverty. Those interests have framed my research and my focus on institutional factors [that] determine the effects of history; the effects of how society is organized; the rules, the laws, the norms, and technology as the prime channel via which human ingenuity and human decisions impact economic productivity and economic well-being. 
<\/p>\n<p>Throughout, I have been fascinated by the interplay between institutions and technology and by how institutional factors and technological factors have evolved over time. So a lot of my research has focused on, for example, why there has been a huge divergence in economic fortunes of different parts of the world since the 16th century or thereabouts. It is very much related to, for example, the fact that European powers colonized the rest of the world and shaped the institutional trajectories of very different nations around the world in very diverse ways.<\/p>\n<p>I\u2019ve also been fascinated by the industrial revolution and how we started this process of using knowledge, science, and various skills in improving the way that we can actually start producing goods and services. <\/p>\n<p><strong>Sam Ransbotham:<\/strong> That\u2019s all really salient for what\u2019s going on right now. You have a recent book, <cite>Power and Progress<\/cite>. I think I was reading the preface of a revised edition, where you noted that things sort of changed on you underfoot. How have the recent changes changed some of your thinking? <\/p>\n<p><strong>Daron Acemoglu:<\/strong> I think two things are worth noting there. The main thesis of <cite>Power and Progress<\/cite> is that technology does, to some extent, what we want it to do. It does not have a preordained destiny that will take us in one direction or another. We have a lot of agency, a lot of choice in shaping the future of technology, and different futures correspond to different winners and losers, different benefits, different costs, different productivities. <\/p>\n<p>We tried to make that point by going into history, showing how critical periods during our recent history, like the last 1,000 years, have led to sometimes big technological breakthroughs but with huge losers, and sometimes those forces have been reversed, and gains from technological betterment have been shared more equitably. 
That message, I think, is more relevant today than ever. AI is a particularly versatile technology. It provides so many different futures for us. <\/p>\n<p>The narrative that there is a determined natural future of AI, and we are all going there whether we want it or not \u2014 and, ultimately, we\u2019re all going to become incredibly more prosperous out of that \u2014 is just simplistic. Fighting against that narrative, I think, is very important today because that narrative lulls us into a sense of helplessness and sense of complacence that could be quite costly. On the other hand, of course, in 2021, 2022, when we were writing, it was impossible to foresee how rapid some of the advances in generative AI would be. But those advances haven\u2019t really changed the basic trade-offs and the basic messages that we wanted to convey in the book. <\/p>\n<p>I talked at the high level about different directions of AI. What are they? I think, simplifying it, you have a couple of poles that are pulling in different directions. I would single out \u2014 in the production process \u2014 automation, which is the dream of most AI models today, especially under the banner of artificial general intelligence (AGI), which aims for large language models or other generative AI tools to reach levels of capabilities comparable to the best workers across a very wide range of domains. The reason why that is viewed as attractive is that just like previous rounds of software that improved cognition in different domains, that can then be used for automating tasks. So AGI is very tightly interwoven with the automation agenda. Automation is great. It gets rid of some routine tasks, some boring tasks. <\/p>\n<p>When it\u2019s applied in the physical domain, such as with cranes or robots, it could remove the most dangerous tasks from the human work schedule, but automation also doesn\u2019t benefit workers by itself. It takes away tasks from workers. 
It is beneficial to capital and capital owners and not so much for workers in general. <\/p>\n<p>So at the other pole, we have things that are complementary to humans, meaning that technology enables humans to do more things or better things or completely new things. These new things [are] what I refer to as new tasks. So if you look at people around you, many of the occupations you\u2019ll see involve things that could not even be imagined 50 or 60 years ago. As a journalist, you\u2019re going to be making videocasts and podcasts and [using] technologies for research that require completely different skills than somebody 60 years ago going to the library and sifting through books. Those are some aspects of new tasks. So are many of the physical occupations in manufacturing that involve much more technical work. Those have generally been very good for productivity and for worker wages and employment. <\/p>\n<p>That\u2019s one dimension in which the future of technology could have very different effects depending on whether we go [in] the automation or the new task direction. I would also like to add, whether we use technology for information centralization or decentralization is also important in that many of the early hopes about computers were centered on decentralization. People could [do things] in their garages that IBM as a centralized organization couldn\u2019t do. Personal computers enabled that to some extent, not anywhere comparable to the hopes of pioneers of computing in the \u201960s and the \u201970s. <\/p>\n<p>But today, we are going in the opposite direction. Large language models are information centralization tools. They collect all of the information. They aim to collect all of the information of humanity ultimately, and then centralize that and process that in a centralized manner that then gives you answers. So there\u2019s less for the decentralized human mind and human participation to do. 
<\/p>\n<p>Centralization and automation are two different poles, but they are complementary. When I\u2019m talking about new tasks, it is really about enabling the technology to go in a direction that can really help workers, help individuals, not just big corporations. So it\u2019s going back to those aspirations that were already present in the late 1960s and 1970s. My work shows how new tasks, when they have been activated, have led to productivity gains and have led to wage gains and employment gains. <\/p>\n<p><strong>Sam Ransbotham:<\/strong> If we think about these new tasks, though, what kinds of things should businesses be looking for? If people buy this argument, and then they want to go down this path, what do they need to do? <\/p>\n<p><strong>Daron Acemoglu:<\/strong> Actually, I think it\u2019s disarmingly simple. AI is really an information technology, a very powerful information tech. It\u2019s not an automation technology. AI is not thinking anywhere like the human brain. Instead, it has some truly impressive capabilities that the human brain doesn\u2019t have, and it lacks some of the judgmental and creativity-related capabilities that the human brain naturally has. <\/p>\n<p>As an information technology, what AI is very good at is sifting through gargantuan data sets and [finding] relevant context and information for some specific task or specific context or specific application. So if you\u2019re an electrician and you encounter equipment that is behaving in a way that you haven\u2019t seen before, or completely new equipment that you don\u2019t have experience with, and if you have the right AI tool, that can immediately and reliably give you information about why that sort of unexpected behavior is occurring, or what \u2026 things you need to know about this equipment and how it interacts with the particular type of electricity grid or the environment that it is situated in. 
<\/p>\n<p>Those are the kinds of things that regular electricians would have to work for decades to acquire, and even then imperfectly. So we can significantly improve what electricians, what nurses, what educators, what journalists, what academics could do using AI in order to perform more sophisticated tasks or new tasks and acquire much better information. I think while AI, generative AI, together with the right sort of scaffolding from good old-fashioned AI that does pattern recognition, could provide that kind of ideal tool for human new tasks, that\u2019s not the direction in which AI is being developed. In fact, none of the big companies are pouring even a small fraction of their investment into developing AI as a pro-human, pro-worker tool. <\/p>\n<p><strong>Sam Ransbotham:<\/strong> Let\u2019s connect these last two points a little bit. These are, as you say, being developed by big companies. When I think about the electrician and your scenario, wouldn\u2019t they naturally get recommended solutions that come from, let\u2019s say, advertising models that are built into the large language model? <\/p>\n<p><strong>Daron Acemoglu:<\/strong> Right. So right now, today, as an electrician, you can take ChatGPT with you and you can ask questions, but there are several problems with that. <\/p>\n<p>First of all, it has not been designed or optimized for that task. Second, it\u2019s not reliable. So a much higher degree of reliability is necessary. Third, it has not been trained on the domain-specific information about all of the relevant electrical equipment, and [it does not have the] deep understanding of the electric laws and electronics that would be necessary. And most importantly, it has also not been trained on use cases of the best electricians dealing with similar problems from which AI could learn. So it is not designed for that task, and it hasn\u2019t been trained with high-quality, domain-specific data. 
All of those restrict your ability to use ChatGPT or similar tools, and that\u2019s the reason why whenever employers are given a push toward using them, the first thing they want to do is just use them for automation, because that just seems to be the path of least resistance. <\/p>\n<p><strong>Sam Ransbotham:<\/strong> To think about that a little bit more, there\u2019s nothing that says that we couldn\u2019t train those models over those domain-specific knowledge bases. Maybe we\u2019re just early days, and that could come out. I think that\u2019s plausible, but I\u2019m not sure if [there are] economic incentives for people to do that.<\/p>\n<p><strong>Daron Acemoglu:<\/strong> The economic incentives are not there, because this is not the business model of the leading corporations. That data doesn\u2019t exist, and it won\u2019t exist unless we have property rights over data, and we have proper data markets. The current architecture of large language models may create hard limits on reliability, whereas in situations like this, reliability could be a very important constraint. <\/p>\n<p>For example, imagine we do this with nurses, and one in a thousand times, they give you the complete opposite of what they should do, and you poison the patient. I think one in a thousand seems very small, but, actually, in medical applications, that will be an unacceptably large casualty rate. So it\u2019s really a different architecture and different sort of preparation training of these models that may be necessary. <\/p>\n<p><strong>Sam Ransbotham:<\/strong> I think your error rate is an interesting one, because I\u2019m not really sure what I think about that. Half the nursing students graduate in the bottom half of their class \u2014 that\u2019s just how averages work.<\/p>\n<p><strong>Daron Acemoglu:<\/strong> But as a result, we don\u2019t allow nurses to make those decisions at the moment. 
Except in a few cases where you have highly trained licensed nurse practitioners, nurses cannot prescribe drugs. They cannot make emergency decisions. When a patient is having problems, they have to wait for a physician to come. That\u2019s the margin that we\u2019re talking about. Nurse-complementary technology would expand what nurses do in those domains. No, you couldn\u2019t do that unless all of the nurses become even better trained than licensed nurse practitioners, or the AI models get much better. <\/p>\n<p><strong>Sam Ransbotham:<\/strong> Let\u2019s push on the nursing example a little bit more. My daughter has recently learned how to drive. I\u2019ll make you nervous \u2014 I think we both live in the same area. She\u2019s a good driver though. But she hasn\u2019t seen millions of almost-wrecks yet, and I would love for her to have that experience. By analogy, the nurses may not have seen these esoteric cases in a way that we were just talking about \u2014 these AI models are fabulous at storing lots and lots of information and recalling that. <\/p>\n<p><strong>Daron Acemoglu:<\/strong> I think there are many things that can be done. The future of technology is rich. If you integrate AI with virtual reality, you can have personalized experiences where your daughter could experience very dangerous situations sitting in front of a computer. I can tell you from my own experience, when you get behind a wheel, you think you know, [but] you don\u2019t. <\/p>\n<p><strong>Sam Ransbotham:<\/strong> We talked a little bit about incentives. Let\u2019s talk about measurement a little bit. I think [one of] the issues we\u2019ve always had is that we can measure the number of widgets, [but] we have a lot of trouble measuring the outputs of our knowledge economy. How important is measurement, and is there anything that we can do to try to improve that? 
<\/p>\n<p><strong>Daron Acemoglu:<\/strong> I think measurement is very important, and there are some puzzles that we should bear in mind. I think these puzzles do feed into my concerns, and also skepticism, about some of the claims. We definitely do live in an age of innovation, according to many measures. <\/p>\n<p>If you look at the number of patents at the [U.S. Patent and Trademark Office], they have quadrupled over the last 40 years. We get an incredible array of new apps every day on our phones. We have much faster turnover of electronics in quite a significant way. When I use my iPhone that\u2019s a couple of years old, everybody says, \u201cWow, you\u2019re really missing out.\u201d <\/p>\n<p>When people were using rotary phones, dial phones, you could use the same model for 30 years, and nobody would bat an eye. So there is a sense in which we are getting a lot of innovations, but using the standard measures of economists, we don\u2019t see much improvement in productivity. <\/p>\n<p>In fact, we\u2019re having slower productivity improvements today than we did in the \u201950s, \u201960s, \u201970s, those boring pre-digital days. What\u2019s up with that? Well, the people from Silicon Valley and economists who are sympathetic to that perspective would say, \u201cThat\u2019s all a measurement problem. You\u2019re just not making allowance for how high quality some of the products you\u2019re getting now is, and the Bureau of Labor Statistics is overestimating inflation. You have in the middle of your palm, a supercomputer, [a] superpowerful machine that allows you to access information [that was] never possible before.\u201d So all of these things they think are the reasons why you shouldn\u2019t look at macroeconomic data; you should ignore all of the economists or data sources. There\u2019s some truth to that, but I think it can be exaggerated. 
<\/p>\n<p>We did not measure the benefits from antibiotics that well either, but you\u2019ve still got amazing improvements [in] many directions, in terms of GDP, in terms of output of the pharmaceutical sector and lives saved. Life expectancy increased tremendously with antibiotics. Well, life expectancy is not increasing. We\u2019re not seeing any of the AI-facilitated pharmaceuticals do anything yet. <\/p>\n<p>Perhaps time will change that, but we just don\u2019t have objective measures that show huge gains from AI as of now. I don\u2019t think that\u2019s just a measurement problem, but measurement can help [us] understand where the bottlenecks are and also improve perhaps certain assessments of \u2026 the impact of AI in different sectors. But I think a lot of it, again, comes down to what I was talking about: If you overdo automation, if you overdo information centralization, you\u2019re not actually going to get all that promised productivity boom. <\/p>\n<p><strong>Sam Ransbotham:<\/strong> I\u2019m bought in. What do we need to do here? I mean, as an individual, what does an individual need to do, given that I can shake my tiny fist against the FAANG companies? What should individuals be doing here? <\/p>\n<p><strong>Daron Acemoglu:<\/strong> I think a lot. At the end of the day, society consists of individuals. If a lot of individuals change their mind, that has an effect. Part of the reason why tech companies have so much power is because they have what Simon Johnson and I called in <cite>Power and Progress<\/cite> persuasion power. They have persuaded the rest of society that their intentions are benign, their technology is good, and they will not misuse it too much. There\u2019s a lot of counterevidence to that, but we still sort of believe it. 
We still believe the leading AI companies when they say, \u201cWe have this amazing godlike technology, believe that it\u2019s godlike, and that it will be used just in your service, your own personal god.\u201d <\/p>\n<p>Absolute power corrupts, absolutely. I don\u2019t know that we should really believe those claims. Different individuals will have to reach their own conclusions, but enough individuals, a critical mass of them, changing their views would have an effect through democratic process. <\/p>\n<p>Who are individuals who have a lot of say? Hundreds of thousands of people, perhaps more, who work as engineers and scientists in these corporations. They determine the direction of research. If they decided next year that they want to work not on automation and AGI but developing more pro-worker, pro-human technologies that will help workers and human decision makers and decentralization, that\u2019s what we would get. That\u2019s an individual decision. <\/p>\n<p>Another individual decision [involves] entrepreneurs. A lot of new ideas come from startups. Right now, startups are aligned with the big companies because their dream is to be bought up by the big companies. That\u2019s the way you become a billionaire right now. Again, that\u2019s a choice. Different values, different priorities, different regulatory systems. Perhaps we should really be much more vigilant in mergers and acquisitions, then that could lead to very different dynamics. <\/p>\n<p><strong>Sam Ransbotham:<\/strong> Good. I hope our students are listening because I do think that most of our students are out there trying to come up with startups, with their goal of being an acquisition by one of these large companies. <\/p>\n<p><strong>Daron Acemoglu:<\/strong> If I wanted to be rich, that\u2019s what I would do, too. 
<\/p>\n<p><strong>Sam Ransbotham:<\/strong> Maybe our measurement problem extends both to our productivity as well as to our incentives, if that\u2019s how we\u2019re measuring success. <\/p>\n<p><strong>Daron Acemoglu:<\/strong> Yeah. <\/p>\n<p><strong>Sam Ransbotham:<\/strong> You mentioned regulation. Let\u2019s touch on that a minute. I tend to think we can have market forces perhaps do a better job of aligning incentives than regulation. What can regulation do here, particularly when we are dealing with goods that are not physical goods? <\/p>\n<p><strong>Daron Acemoglu:<\/strong> I would like to say three things about regulation. First of all, regulation is always tricky. Look at Europe. Europe is so far behind in AI and many areas of tech because they\u2019ve not been very conducive to innovation via their regulatory system. Too many organizations, too much interference, that can be very bad. So you have to balance things. <\/p>\n<p>Second, some regulation on health-critical, information-critical, democracy-critical things is absolutely necessary. You cannot let AI models pretend to be doctors without having some sort of assessment that they are actually giving adequate information.<\/p>\n<p>We apply tremendous barriers to anybody becoming a quack doctor, and we should apply similar standards to AI models. But, most importantly, we may need a change in the philosophy of regulation. Regulation should not be a reactive thing where we try to stop whatever AI companies are trying to do. I think we need proactive regulation that helps the AI industry move in a more socially beneficial direction. That starts by recognizing what this socially beneficial direction is. I\u2019ve argued it\u2019s pro-worker, new tasks, more decentralization. 
It then recognizes why the current playing field is tilted against it and tries in a soft way, without stopping or killing the market process, to correct those distortions and provide a living chance to the alternative directions.<\/p>\n<p><strong>Sam Ransbotham:<\/strong> That\u2019s a moment of hope there. That\u2019s good. Let me switch a little bit. Our show is <cite>Me, Myself, and AI<\/cite>. Let\u2019s let people get to know you a little bit. How did you get interested in these things? <\/p>\n<p><strong>Daron Acemoglu:<\/strong> I\u2019ve always been interested in technology as the engine of the industrial revolution, of the rapid growth process, and that brought me, together with my studies of labor markets, to focus on automation. I\u2019ve been working on automation for over 20 years. Then, when AI models started making rapid advances in the mid-2010s, I got worried about what that would imply from this aspect of the future of work, what it would imply for wages and employment. And that made me invest more time and resources into AI and understanding AI, understanding societal implications, but also understanding the technology. And I think it\u2019s fascinating. It\u2019s super promising, but also super scary. <\/p>\n<p><strong>Sam Ransbotham:<\/strong> I think that\u2019s a nice way to wrap up that balance that you keep coming back to. One of the things we like to do in the show is ask you a bunch of rapid-fire questions. [Tell us] the top thing that comes to mind. What did you want to be when you grew up? When you were a kid, what did you want to be when you first were thinking about a career? <\/p>\n<p><strong>Daron Acemoglu:<\/strong> I wanted to become a social scientist. <\/p>\n<p><strong>Sam Ransbotham:<\/strong> That worked out for you then. What\u2019s the biggest misconception that people have about artificial intelligence? <\/p>\n<p><strong>Daron Acemoglu:<\/strong> That it will somehow completely replace humans. 
I think at the end, AI will be something that works alongside humans. The better we understand that and how to achieve that, the better we will be in shaping the future of work and the future of humanity. <\/p>\n<p><strong>Sam Ransbotham:<\/strong> How do you personally use these tools? <\/p>\n<p><strong>Daron Acemoglu:<\/strong> I use it just like other people. I sometimes ask questions to ChatGPT, and most of the time, I am both surprised by how good it is and disappointed that if I really trusted everything I got from it, I wouldn\u2019t be doing so well. <\/p>\n<p><strong>Sam Ransbotham:<\/strong> I have to push back a little bit there. I find that if I know something about a subject and I ask a question, I\u2019m disappointed in the results. <\/p>\n<p><strong>Daron Acemoglu:<\/strong> Exactly. <\/p>\n<p><strong>Sam Ransbotham:<\/strong> And if I don\u2019t know much about the subject, then I\u2019m impressed with the results. <\/p>\n<p><strong>Daron Acemoglu:<\/strong> That\u2019s it. <\/p>\n<p><strong>Sam Ransbotham:<\/strong> That ought to worry me.  <\/p>\n<p><strong>Daron Acemoglu:<\/strong> Even when I know about the subject, I am impressed by how good it is able to synthesize the basic knowledge there, but it always pretends to know more and gives answers that are really incorrect because it\u2019s extrapolating too much. <\/p>\n<p><strong>Sam Ransbotham:<\/strong> What has moved faster than you expected with artificial intelligence? <\/p>\n<p><strong>Daron Acemoglu:<\/strong> The large language models. Their reasoning capabilities are truly impressive. <\/p>\n<p><strong>Sam Ransbotham:<\/strong> It\u2019s been great talking to you. This has been a fascinating conversation. I love your balance of both optimism and concern, and I think that\u2019s a nice way to wrap up this session. Thanks for taking the time to talk with us. <\/p>\n<p><strong>Daron Acemoglu:<\/strong> Thank you, Sam. This was a lot of fun. 
<\/p>\n<p><strong>Sam Ransbotham:<\/strong> Thanks for listening. <cite>Me, Myself, and AI<\/cite> Season 13 premieres on March 10. Please join us. <\/p>\n<p><strong>Allison Ryder:<\/strong> Thanks for listening to <cite>Me, Myself, and AI<\/cite>. Our show is able to continue, in large part, due to listener support. Your streams and downloads make a big difference. If you have a moment, please consider leaving us an Apple Podcasts review or a rating on Spotify. And share our show with others you think might find it interesting and helpful.<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>In this bonus episode of the Me, Myself, and AI  [&#8230;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"content-type":"","footnotes":""},"categories":[194],"tags":[],"class_list":["post-18538","post","type-post","status-publish","format-standard","hentry","category-graphic-design"],"acf":[],"_links":{"self":[{"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/posts\/18538","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/comments?post=18538"}],"version-history":[{"count":0,"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/posts\/18538\/revisions"}],"wp:attachment":[{"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/media?parent=18538"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/categories?post=18538"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/
\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/tags?post=18538"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}