Beyond the Model — Why Responsible AI Must Address Workforce Impact


For the fifth year in a row, MIT Sloan Management Review and Boston Consulting Group (BCG) have assembled an international panel of AI experts that includes academics and practitioners to help us understand how responsible artificial intelligence (RAI) is being implemented across organizations worldwide. In prior years, we examined organizational RAI maturity; third-party, generative, and agentic AI risks; and core AI governance pillars, including accountability, explainability, and oversight. Since our project began, AI use has rapidly spread among organizations of every size, sector, and geography. At the same time, early fears about AI's impact on the workforce have begun to materialize, with several companies announcing substantial layoffs while citing AI-enabled efficiency gains.

Given the growing concerns over how much human workers will be affected by AI, we asked our panel to react to the following provocation: Responsible AI practice should address workforce impact, not just AI system risk. Nearly 80% of our panelists agree or strongly agree with the statement. Our panel previously highlighted that sound AI governance asks not only how a technology is designed or deployed but whether it should be used at all. This year’s panel extended that logic, stressing that responsible AI must look beyond safe systems to the real-world consequences for workers and economic stability. Below, we share our panelists’ insights and offer our practical recommendations for organizations seeking to address workforce impact as part of their responsible AI governance.

Responsible AI must be sociotechnical, not just technical. Our experts believe that AI will change the future of work. Katia Walsh, AI lead at Apollo Global Management, argues that “we are on the precipice of a societal revolution that will profoundly alter ways of working,” and MIT professor Sanjay Sarma agrees that “implications on jobs will be significant.” In fact, Mike Linksvayer, vice president of developer policy at GitHub, points out that “as AI is rapidly incorporated into day-to-day work, it is already reshaping how judgment is exercised, how quickly people learn, and what individuals can reasonably attempt,” citing software development as a clear example. Because AI reorganizes workflows, fragments tasks, and redistributes power between workers and organizations, our experts argue that RAI cannot be defined in solely technical terms.

As senior AI executive David Hardoon explains, “Far too often, AI is mistaken for a mere technology when in reality it is a much broader ecosystem involving people, processes, governance, and society at large.” Simon Chesterman, National University of Singapore’s vice provost, says that “if responsible AI only means making the model safe, accurate, and compliant, we’ve defined the problem too narrowly,” adding, “If we don’t address the human consequences, responsible AI becomes a technical checklist with a moral halo.” Ranier Hoffmann, chief data officer of EnBW, puts it another way: “Responsible AI is ultimately about governing sociotechnical systems, not just compliant algorithms.” For Wipro’s chief product officer Jai Ganesh, “responsible AI is about ensuring innovation benefits society as a whole, including the people whose work it transforms.” In other words, responsible AI is not just about what a system does but about what it does to people; overlooking this distinction carries real socioeconomic risks.

The current RAI discourse has not kept pace. Renato Leite Monteiro, vice president of privacy, data protection, AI, and intellectual property at e&, regrets that the “conversation has been dominated by system-level concerns like bias, explainability, and safety.” While these considerations are important, he says, they are “incomplete” because AI “reshapes how people work, what skills matter, who gets opportunities, and who gets left behind.” Bruno Bioni, founder and director of Data Privacy Brasil, agrees, cautioning that by focusing on narrow technical and model-centric risks like bias mitigation, privacy, robustness, or model safety, “governance frameworks risk collapsing into a narrowly technocratic approach.” Naomi Lariviere, ADP’s chief product owner, expands on that, saying, “If we only focus on guardrails, we miss how AI reshapes accountability, advantage, and day-to-day experience.”

Workforce impact is a core AI risk to social and economic stability. Although proponents of rapid AI adoption frequently cite efficiency and productivity as core motivations, our experts warn that a failure to address workforce impact could undermine these goals and exacerbate economic issues. OdiseIA president Idoia Salazar illustrates the scope of the problem, noting that “AI can reshape tasks and roles, intensify monitoring and productivity pressure, shift decision-making power away from workers, and produce uneven impacts across different groups.” As Yan Chow of Automation Anywhere puts it, “If AI maximizes efficiency but decimates consumer purchasing power or sparks unrest, it fails as a sustainable business tool.” Hoffmann goes further, arguing that “workforce impact is not a ‘soft’ concern but rather a core system design parameter” and cautioning that organizations that “deploy AI where it adds little value but creates organizational strain … risk weaker oversight and poorer outcomes.”

The business case for taking workforce impacts seriously may already be playing out in practice. Alyssa Lefaivre Škopac, director of trust and safety at Alberta Machine Intelligence Institute, raises the issue of companies that declare themselves “AI first” as they cut workers only to “rehire when the capabilities don’t match the hype.” She says this “fundamental misunderstanding of AI capabilities and human talent” comes with “real economic and human cost.” She adds, “Thoughtfully navigating workforce impact may be foundational to whether AI actually delivers the positive impact we’re all hoping for.” Pierre-Yves Calloc’h agrees that “workforce integration thinking is a critical factor in the long-term success of any AI initiative,” while Stanford CodeX fellow Riyanka Roy Choudhury cautions that “ignoring the impact on jobs may eventually contribute to broader economic instability.”

In response to that concern, many experts emphasize that reskilling and upskilling workers is crucial to mitigating AI’s potentially negative workforce effects. Ganesh recommends implementing a two-pronged strategy that addresses bias, safety, privacy, and security issues along with workforce impact by “upskilling, educating employees to work confidently alongside intelligent systems, and being transparent about how AI is used in decision-making.” University of Helsinki professor Teemu Roos similarly emphasizes that “the primary concern is ensuring sufficient support for upskilling and reskilling among the workforce to address rapid change and increasing complexity.” Not all experts are optimistic about this approach, however. Chow observes that “technological progress is exponential, while human reskilling remains linear,” warning that “unless responsible AI explicitly mandates accelerating workforce readiness to match this velocity, the skills gap will become an unbridgeable chasm, rendering upskilling a hollow promise.”

Responsibility for workforce impact should be distributed. Given the substantial challenges that AI poses to the future of work, Kirtan Padh, scientific collaborator at AI Transparency Institute, asks, “Who is responsible for any negative impacts on the workforce?” Is it businesses, governments, or both? IMD Business School professor Öykü Işik believes that addressing AI’s workforce impact “is a matter of formal corporate governance” that “undoubtedly rests with the board and executive leadership.” GovLab cofounder and chief research and development officer Stefaan Verhulst agrees that “companies must improve corporate policies that protect and nurture their employees.” Yet Nasdaq’s head of AI research and engineering Douglas Hamilton calls for a division of responsibilities, arguing that AI-related job displacement should be the primary concern of “governments, universities, and nonprofits,” whereas “responsible companies need to fully capture its value in unequivocal ways.”

Several experts argue that companies cannot be expected to bear this burden alone, while pointing to the role of policy and lawmakers. Wharton School professor Kartik Hosanagar argues that “policy makers hold the primary responsibility” for the workforce impacts of AI. At the policy level, Ganesh calls for “preparing the labor market for collaborating with AI by identifying future skills, adapting curricula, and supporting transitions,” while Sarma argues that this preparation requires “everything from completely rethinking our educational paradigms to reskilling, unemployment support, and fundamental questions about the future of the economy.” Hardoon says, “A truly responsible approach demands holistic governance, AI literacy training, and policies that protect workers and preserve human agency.”

Several experts also caution that the stakes of inaction are potentially high. ForHumanity founder Ryan Carrier warns that failure to address workforce impact “will result in increased economic inequality as the wealth created by AI would be increasingly concentrated.” He believes that “a legislative policy response and consumer choice have a role to play in signaling whether we want corporations to continue to employ humans, and to what degree.” Bioni adds that “labor unions and worker associations can play a critical role through collective bargaining agreements [including] provisions on prior consultations before AI deployment, access to information about automated decision-making systems, and limits on algorithmic surveillance.”

Recommendations

In summary, we offer the following recommendations for organizations seeking to address workforce impact as part of their responsible AI efforts:

1. Increase the scope of RAI practices beyond models. Expand the definition of responsible AI to encompass not just model performance but the full ecosystem of people, processes, and institutions that shape how AI is built, deployed, and experienced. Workforce impact is a core organizational design parameter that should be proactively embedded in AI governance frameworks from the outset. Governance frameworks that focus exclusively on technical performance miss the deeper question of what AI does to workers, organizations, and economic life. Workforce impact must be evaluated at the board level alongside business outcomes.

2. Include workforce impact as part of your AI strategy. Organizations are racing to create strategies for deploying AI tools and upskilling staff on their use. Plans for AI that change the nature of work should be accompanied by plans for human reskilling, redeployment, and transition strategies. However, as Chow suggests, reskilling may not keep pace with technological advances, so companies also need to consider other options to address workforce impact. Include workforce metrics, such as displacement rates and reskilling completion, alongside technical performance and value measures when tracking implementation. Companies should ensure their strategy accounts for the hidden costs of large-scale workforce impact, including reputational damage, reduced consumer trust, and growing regulatory risk. These potential downsides may ultimately outweigh the short-term efficiency gains.

3. Evaluate worker impact alongside other product-level risks. Product evaluations must move beyond technical performance to include workforce effects, including overreliance, skills atrophy, disempowerment, “AI brain fry,” and work intensification. These factors should be part of risk identification and mitigation development. Transparency about how AI is used in decision-making, what tasks it will reshape or eliminate, and mitigation plans (e.g., transition support) should be built into deployment plans and considered as part of the business case for the use of AI. Workforce impacts must be explicitly considered as part of go/no-go decisions before pursuing specific AI tools.

4. Make employees part of the conversations about workforce impact. Organizations have an obligation to communicate openly with workers who may be affected by AI — not as a courtesy but as a core governance responsibility. Workforce impact statements should be part of organizational AI strategies, alongside business value statements. Otherwise, responsible AI remains a conversation that happens above workers rather than with them. And in some jurisdictions, this engagement may not be optional. Workers’ councils are increasingly important in shaping AI strategy, especially in cases where worker displacement may occur.

5. Assign clear leadership accountability for workforce impact. Addressing workforce impact cannot be treated as a shared responsibility that belongs to everyone — and therefore no one. While it requires coordinated effort across human resources, operations, legal, technical, and business leadership, cross-functional collaboration without named ownership is how consequential issues fall through the cracks.

Organizations must designate a specific leader, with real authority and board-level visibility, who is accountable for developing and executing a workforce impact strategy. To address externalities, they’ll need to proactively engage with policy makers, industry bodies, and labor organizations. This leader should be prepared to make the case, to shareholders and executives alike, that the hidden costs of large-scale displacement — the erosion of in-house expertise needed to verify AI outputs, reputational damage, eroded consumer trust, and mounting regulatory exposure — will outweigh the short-term efficiency gains that drove the cuts in the first place. If no single leader owns workforce impact, it will remain a talking point in governance documents rather than a genuine organizational commitment.

Published on: April 21, 2026

