This Article examines the legal and constitutional implications of a significant yet underexplored development in U.S. foreign relations: the emergence of algorithmic foreign policy, where artificial intelligence (AI) systems are increasingly influencing or autonomously generating decisions traditionally reserved for human policymakers. From autonomous weapons systems to AI-enhanced surveillance pacts and predictive diplomacy tools, executive agreements now encode algorithmic logic into national security and foreign policy. The constitutional and legal frameworks that govern these executive agreements remain rooted in assumptions of human discretion, deliberative process, and political accountability. This study argues that existing doctrines, including the political question doctrine and the War Powers Resolution, are ill-equipped to regulate AI-infused foreign relations. It offers an innovative framework for algorithmic legal accountability, reconciling emerging AI capabilities with the principles of separation of powers, treaty processes, and transparency norms. Drawing on case studies involving autonomous drone strikes, cyber operations, and AI-led diplomatic communications, this Article identifies doctrinal gaps and proposes legal reforms to ensure that AI-augmented executive agreements remain constitutionally constrained, democratically legitimate, and subject to judicial review.
Algorithmic Foreign Policy and Executive Agreements: Reassessing Legal Accountability in the Age of Artificial Intelligence
13 Feb 2026