
Deepfakes and Democratic Erosion: Regulating AI-Generated Foreign Election Interference in a Fragmented Legal Landscape

The 2024 global election cycle, the “biggest election year in human history,” represented the first major test of AI-driven disinformation at scale, with deepfakes fueling misinformation campaigns but falling short of the widespread chaos some had predicted.[i] In the United States, an AI-generated robocall impersonating President Joe Biden urged New Hampshire Democratic primary voters not to participate, demonstrating how artificial intelligence can directly suppress voter turnout.[ii] In Slovakia’s parliamentary elections, held shortly before the 2024 cycle, manipulated audio deepfakes falsely claiming election fraud circulated widely on social media.[iii] As generative AI advances, state-sponsored actors such as Russia and China increasingly use deepfakes to interfere in foreign elections, threatening national sovereignty and public confidence in democratic institutions.[iv] Although domestic frameworks in the United States and the European Union offer some degree of protection, significant gaps remain in the international legal order that must be addressed to meet the cross-border threat posed by AI-generated election interference.[v]

Deepfakes, synthetically generated or altered media that convincingly mimic real individuals or events, have become a potent tool for election manipulation, eroding the line between authentic and fabricated content.[vi] During the 2024 U.S. presidential race, viral AI-generated videos depicted Vice President Kamala Harris making inflammatory statements and were amplified by high-profile accounts.[vii] In India, deepfake videos of celebrities criticizing Prime Minister Narendra Modi circulated widely during his campaign.[viii] These incidents foster distrust, deepen societal polarization, and weaken democratic legitimacy, and surveys show that many Americans fear AI’s influence on elections.[ix] Such foreign-orchestrated activities may breach the customary international law principle of non-intervention, as articulated by the International Court of Justice in Military and Paramilitary Activities in and Against Nicaragua.[x] State-sponsored deepfakes designed to manipulate voter behavior or destabilize electoral processes could qualify as impermissible intervention in a state’s “political, economic, and cultural elements” under this doctrine.[xi] United Nations assessments further underscore AI’s role in exacerbating disinformation and call for proactive measures to preserve electoral integrity.[xii]

Responses to these challenges reveal a fragmented regulatory landscape, with the United States favoring enforcement-driven measures and the European Union adopting a more preventive, risk-based model.[xiii] In the United States, the Foreign Agents Registration Act mandates disclosure of foreign-influenced materials but provides no express guidance on AI-generated content.[xiv] Several states have enacted bans on deceptive election deepfakes, though First Amendment challenges and Section 230 immunity for platforms often constrain enforcement.[xv] The Federal Communications Commission has addressed audio deepfakes by declaring AI-generated robocalls unlawful under the Telephone Consumer Protection Act.[xvi] Meanwhile, the EU’s Artificial Intelligence Act designates systems “capable of influencing elections” as high-risk, imposing transparency obligations, risk assessments, and mandatory labeling of deepfakes.[xvii] The Digital Services Act further requires very large online platforms to mitigate systemic risks, including election-related disinformation.[xviii]

Notwithstanding these advances, persistent enforcement shortfalls and the lack of binding international coordination allow foreign actors to exploit regulatory gaps.[xix] Technical difficulties in detection, tensions with freedom of expression, and the absence of universal standards thwart effective countermeasures.[xx] The ICJ’s non-intervention jurisprudence provides a conceptual foundation but offers little concrete guidance on modern, technology-driven interference. Policymakers are pursuing a framework that would establish minimum standards for AI watermarking, attribution protocols, and cross-border enforcement cooperation.[xxi] While the EU rigorously implements its AI Act obligations in electoral contexts, the United States can contribute by advancing pending legislation such as the Protect Elections from Deceptive AI Act to mandate disclosures.[xxii] Voluntary industry accords, in turn, offer a more dynamic line of defense, one better able to keep pace with AI’s rapidly evolving technological landscape.[xxiii]

Hunter Holliday is a staff member of Fordham International Law Journal Volume XLIX.

[i] See A ‘super year’ for Elections, United Nations Dev. Programme, https://www.undp.org/super-year-elections (last visited Jan. 9, 2026).

[ii] See Steve Kramer, 39 FCC Rcd. 6004 (2024).

[iii] See Mark Scott, Deepfakes, distrust and disinformation: Welcome to the AI election, Politico (Apr. 16, 2024, at 6:30 CET), https://www.politico.eu/article/deepfakes-distrust-disinformation-welcome-ai-election-2024/.

[iv] See U.S. Dep’t of Homeland Sec., Off. of Intel. & Analysis, Homeland Threat Assessment 2025, at 26 (2024), https://www.dhs.gov/sites/default/files/2024-10/24_0930_ia_24-320-ia-publication-2025-hta-final-30sep24-508.pdf.

[v] See Ajay Patel, Freedom of Expression, Artificial Intelligence, and Elections 6 (2025), https://unesdoc.unesco.org/ark:/48223/pf0000393473.locale=en; see also Sofia Gracias, Comparing the EU AI Act to Proposed AI-Related Legislation in the US, U. Chi. Bus. L. Rev. Online, https://businesslawreview.uchicago.edu/online-archive/comparing-eu-ai-act-proposed-ai-related-legislation-us.

[vi] Shanay Gracia, Americans in both parties are concerned over the impact of AI on the 2024 presidential campaign, Pew Rsch. Ctr. (Sept. 19, 2024), https://pewrsr.ch/4e8POz1.

[vii] See Scott, supra note 3.

[viii] See Meryl Sebastian, AI and deepfakes blur reality in India elections, BBC (May 15, 2024), https://bbc.com/news/world-asia-india-68918330.

[ix] See Gracia, supra note 6.

[x] See Military and Paramilitary Activities in and Against Nicaragua (Nicar. v. U.S.), Judgment, 1986 I.C.J. 14, ¶ 202 (June 27) (prohibiting interference in a state’s domestic and foreign affairs).

[xi] See id.

[xii] See Volker Türk, U.N. High Comm’r for Hum. Rts., Human Rights: A Path for Solutions, at 25, U.N. Doc. HRC/NONE/2024/6/Rev. 1 (Sept. 2024), https://www.ohchr.org/sites/default/files/2024-09/hc-visionstatement-2024-en.pdf.

[xiii] See Lawrence Norden, States Take the Lead in Regulating AI in Elections — Within Limits, Brennan Ctr. (Aug. 7, 2024), https://www.brennancenter.org/our-work/research-reports/states-take-lead-regulating-ai-elections-within-limits.

[xiv] See 22 U.S.C. § 611.

[xv] See, e.g., Cal. Elec. Code § 20510 (West 2025) (California anti-deepfake law struck down on free speech grounds in Kohls v. Banta, 752 F. Supp. 3d 1187, 1197 (E.D. Cal. 2024)); see also The Conference Board, Federal Judge Strikes Down California Deepfake Law (Aug. 7, 2025), https://www.conference-board.org/research/CED-Newsletters-Alerts/federal-judge-strikes-down-california-deepfake-law.

[xvi] Press Release, Fed. Commc’ns Comm’n, FCC Makes AI-Generated Voices in Robocalls Illegal (Feb. 8, 2024), https://www.fcc.gov/document/fcc-makes-ai-generated-voices-robocalls-illegal.

[xvii] See Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), 2024 O.J. (L 1689) 1, 9.

[xviii] See Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services (Digital Services Act) and amending Directive 2000/31/EC, 2022 O.J. (L 277) 1, 79.

[xix] See Sam Stockwell, AI-Enabled Influence Operations: Safeguarding Future Elections, Ctr. for Emerging Tech. & Sec. (Nov. 13, 2024), https://cetas.turing.ac.uk/publications/ai-enabled-influence-operations-safeguarding-future-elections.

[xx] See World Econ. Forum, The Global Risks Report 2024, at 18 (2024), https://www3.weforum.org/docs/WEF_The_Global_Risks_Report_2024.pdf.

[xxi] See A Tech Accord to Combat Deceptive Use of AI in 2024 Elections, Munich Sec. Conf. (Feb. 17, 2024), https://securityconference.org/en/aielectionsaccord/accord/.

[xxii] See Protect Elections from Deceptive AI Act, S. 2770, 118th Cong. (2023).

[xxiii] See Juliane Müller, What Have We Learned About AI in Elections?, Int’l Inst. for Democracy & Electoral Assistance (Nov. 27, 2025), https://www.idea.int/news/what-have-we-learned-about-ai-elections.


This is a student blog post and in no way represents the views of the Fordham International Law Journal.