{"id":21536,"date":"2026-04-21T21:11:34","date_gmt":"2026-04-21T21:11:34","guid":{"rendered":"https:\/\/ideainthebox.com\/index.php\/2026\/04\/21\/weaponized-deepfakes-ai-artificial-intelligence\/"},"modified":"2026-04-21T21:11:34","modified_gmt":"2026-04-21T21:11:34","slug":"weaponized-deepfakes-ai-artificial-intelligence","status":"publish","type":"post","link":"https:\/\/ideainthebox.com\/index.php\/2026\/04\/21\/weaponized-deepfakes-ai-artificial-intelligence\/","title":{"rendered":"Weaponized deepfakes"},"content":{"rendered":"<div>\n<p>For years, experts have warned that deepfakes\u2014AI-generated videos, images, or audio recordings of people doing or saying things they haven\u2019t actually done in real life\u2014could be deployed in malicious ways.\u00a0<\/p>\n<p>These dangers are now here. Improvements in deepfake technology, and the widespread availability of easy-to-use and cheap (or free) generative models, have made it easier than ever for anyone to fake reality in a way that\u2019s increasingly difficult to spot.<\/p>\n<p>We\u2019re not just talking about <a href=\"http:\/\/o\/\">AI slop<\/a>, the often obviously fake content that has taken over the internet. Rather, weaponized deepfakes\u2014from sexually explicit images to scam posts to political propaganda\u2014may look startlingly real. 
There are already examples around the world of their <a href=\"https:\/\/www.boomlive.in\/decode\/exclusive-meta-ais-text-to-image-feature-weaponised-in-india-to-generate-harmful-imagery-26712\">inciting violence<\/a>, <a href=\"https:\/\/www.theguardian.com\/technology\/2024\/nov\/26\/far-right-weaponising-ai-generated-content-europe\">trying to change<\/a> minds (and <a href=\"https:\/\/www.thesaturdaypaper.com.au\/news\/politics\/2026\/01\/31\/pauline-hanson-and-the-ai-slopaganda-election\">maybe even votes<\/a>), and <a href=\"https:\/\/www.i24news.tv\/en\/news\/israel\/diplomacy-defense\/artc-is-benjamin-netanyahu-dead-fake-news-becomes-the-loudest-voice-on-social-media\">generally sowing mistrust<\/a>.\u00a0<\/p>\n<p>That\u2019s why experts worry that weaponized deepfakes will further crater critical thinking skills, as well as our trust in institutions and each other. This has dire consequences for society and governance\u2014and, of course, for the people targeted. As with many other examples of technology\u2019s harms, the human impacts will weigh disproportionately on women and marginalized groups; though the technology has evolved in the past few years, a 2023 <a href=\"https:\/\/www.securityhero.io\/state-of-deepfakes\/\">study<\/a> found that 98% of deepfakes were pornographic and 99% depicted women.\u00a0<\/p>\n<p>Just take Grok. Since Elon Musk launched the \u201cedit image\u201d function of this AI chatbot late last year, users have created millions of sexualized images, including many of children and women; one <a href=\"https:\/\/aiforensics.org\/work\/grok-unleashed\">report<\/a> estimated that 81% of these Grok-produced images depicted women. Despite widespread criticism, xAI\u2019s initial response was to limit the feature to paying members; it has since blocked the nudity feature in jurisdictions where it is illegal.\u00a0<\/p>\n<p>There\u2019s also been an explosion of political deepfakes. 
The Trump administration, for example, has regularly produced and shared AI-generated images and videos. Not all of them are meant to look real, but others appear designed to sway public opinion and even humiliate the person depicted.\u00a0<\/p>\n<p>In January, meanwhile, Texas attorney general Ken Paxton shared a <a href=\"https:\/\/www.facebook.com\/reel\/776767768040302\">video<\/a> appearing to show his opponent in the Republican primary for a US Senate seat, Senator John Cornyn, dancing with Representative Jasmine Crockett, a contender for the Democratic nomination. But this never happened\u2014a fact the ad did not clearly disclose.\u00a0<\/p>\n<p>Suggested solutions include instituting new <a href=\"https:\/\/www.technologyreview.com\/2025\/07\/15\/1120094\/ai-text-to-speech-programs-could-one-day-unlearn\/\">technical safeguards<\/a> and detection methods at the big AI firms, encouraging users to take more protective actions, and crafting new legislation or applying existing regulatory frameworks, like copyright law, to the issue.\u00a0<\/p>\n<p>But these all have limits. Technical safeguards can be bypassed; for instance, bad actors can simply switch to open-source models built without them. Getting people to change how they behave, such as by watermarking photos or posting less personal information online, is unrealistic. Stronger regulations require enforcement\u2014and while President Trump has signed legislation that criminalizes deepfake porn, his administration continues to post other types of harmful deepfakes. 
In late January, for instance, the White House shared an <a href=\"https:\/\/www.nytimes.com\/2026\/01\/22\/us\/politics\/nekima-armstrong-photo-white-house.html\">altered<\/a> image of a Minneapolis civil rights lawyer, darkening her skin and changing her facial expression from one of calm to exaggerated crying.<\/p>\n<p>The problem could get much worse\u2014and soon. There are high-stakes midterm elections in the United States later this year, and the federal agencies that traditionally addressed elections-related information integrity have been weakened. So have many outside research groups dedicated to fact-checking and fighting election-related disinformation.<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>For years, experts have warned that deepfakes\u2014AI-generated videos, images, or  [&#8230;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"content-type":"","footnotes":""},"categories":[226],"tags":[],"class_list":["post-21536","post","type-post","status-publish","format-standard","hentry","category-technology"],"acf":[],"_links":{"self":[{"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/posts\/21536","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/comments?post=21536"}],"version-history":[{"count":0,"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/posts\/21536\/revisions"}],"wp:attachment":[{"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/media?parent=21536"}],"wp:term":[{"taxonomy":"category","embeddable"
:true,"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/categories?post=21536"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/ideainthebox.com\/index.php\/wp-json\/wp\/v2\/tags?post=21536"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}