AI Geopolitics and Human Rights: Why the Law Lags Behind

Author: Anushka Saraswat

The technological revolution around Artificial Intelligence (AI) has inevitable implications for the future of Economic and Social Rights (ESRs), affecting nearly every aspect of human life, the global economy, the rules of competition, and society. As with other dual-use technologies that serve both civilian and military applications, advancements in AI research and development are driven by geopolitical competition. Political movements and labour concerns, alongside new rules of war and power competition, are shaping the geopolitics of AI. Yet AI policy influenced by the Global North gives inadequate attention to the resulting gaps in legal protection and ethical safeguards.

U.S.-China AI Competition and International Human Rights

AI is the new frontier of U.S.-China power competition, expected to redefine geopolitical rivalry, with far-reaching implications for international law and global governance. While the U.S. invests in the latest large language models (LLMs), semiconductor chips, and AI infrastructure, China is further ahead in application-oriented integration, with pilots in its smart cities that fuse traffic cameras with AI sensors in autonomous vehicles, among other examples.

The Trump-led U.S. government, known for its stark inward-looking policies and pro-innovation stance, introduced a bill that seeks to bar states from regulating AI models and automated decision systems. Under the Chinese Communist Party, China remains at the forefront of AI integration in manufacturing and infrastructure; however, the rights and interests of workers, trade unions, and protestors are at serious risk.

Arguably, the U.S.-China AI competition poses a unique challenge to international human rights law and its universal application, in service and manufacturing industries alike. The intense competition driving the AI revolution is defined in terms of military power and corporate dominance; the risks to human rights and sustainability have only begun to surface, both for workers in these two countries and for those fuelling technical development in the developing and underdeveloped world whom the international human rights regime aims to protect.

A major part of international law consists of rules and principles governing the conduct of nations towards each other, individuals, and international organisations. In the context of international human rights law, nations agree to a set of obligations, principles, and rules to establish clearly articulated national policies that reduce oppressive and exploitative behaviour in the pursuit of economic development and human welfare. A report by Chatham House argues, “While human rights do not hold all the answers, they ought to be the baseline for AI governance. International human rights law is a crystallisation of ethical principles into norms, their meanings, and implications, well-developed over the last 70 years.” Respect for these obligations remains crucial at a time when the United Nations (UN) faces functional and operational challenges due to wars, with the UN Human Rights Council (UNHRC) suffering from politicisation and a lack of political will.

At present, the law on AI governance finds its source in soft law instruments and “ethical guidelines” produced by governments, international organisations, and civil society, such as the European Union (EU) Ethics Guidelines for Trustworthy AI, the OECD AI Principles, the G20 AI Principles, the Global Partnership on AI (GPAI), the UN Guiding Principles on Business and Human Rights (UNGPs), and the UNESCO Recommendation on the Ethics of AI. In addition, diplomatic statements in the AI governance space include the GPAI Ministerial Declaration, the G7 Ministers’ Statement (2023), the Bletchley Declaration (2023), and the Seoul Ministerial Declaration (2024). These existing frameworks are inadequate to address the challenges of protecting human rights. While soft law instruments may in time evolve into customary law, the scale and speed of AI’s impact require binding treaties at the regional and global levels.

The Risks for the Global South

The risks associated with the AI revolution and U.S.-China competition are multifaceted, ranging from environmental degradation to the protection of human rights and implications for the 2030 Agenda for Sustainable Development. A lack of ethical regulation of AI exacerbates inequalities and the digital North-South divide, the volatility of markets in Asia and Africa, manipulation by algorithms, the uncontrolled nature of self-aware AI, and the loss of human influence and autonomy. One of the primary issues in AI integration is the uncertain viability of ethical frameworks and regulations across the very different models of socio-economic growth in Global South countries. In its report “Gen-AI: Artificial Intelligence and the Future of Work,” the IMF flags risks to human, business, and labour rights, including the wipeout of 60% of white-collar entry-level jobs in some markets. Even though many analysts predict that the impact of AI on blue-collar jobs will be minimal, the integration of autonomous machines and robots into mechanical jobs exposes workers to the risk of replacement.

The AI revolution is not only a competition but also a tool for geopolitical dominance. The mismatch is grave between the slow adaptability of the law, which relies on due process, non-discrimination, human dignity, and protection of the marginalised and underprivileged, and the pace of the AI race. The frameworks on AI governance and ethics are non-binding and fragmented, with insufficient engagement with domestic laws and institutions in the Global South, creating a ‘black hole’ for enforcing accountability and ensuring the protection of human rights.

To take effective measures, nations must facilitate cooperation and exchange across borders on judicial pronouncements, institutional activism, legislative will, and civil society participation.

The Way Forward

The geopolitical race in the AI revolution must be evaluated beyond narrow considerations of dominance and warfare and redirected towards ethical ends, that is, to serve humanity. In the race between global powers like the U.S. and China, marginalised communities in the Global South bear the heaviest burden, be it job losses, widening digital divides and socio-economic disparities, or regulatory challenges that their legal systems have yet to catch up with. The workers and populations of these countries stand at the forefront of the AI challenge. The existing soft law instruments, ethical frameworks, and diplomatic statements, however noble in intent, are inadequate and toothless. AI governance must rest on treaty-level binding obligations, with meaningful inclusion of developing nations in devising the governance design. Moreover, international human rights law, premised on universality and dignity, must evolve from ethical aspiration to operational mandate. What this calls for is a global compact that prioritises human dignity as the foundation of AI regulation, requires impact assessments in AI deployment, and empowers domestic institutions in the Global South to hold corporations and governments accountable. Until major powers subordinate geopolitical ambition to shared human welfare, the AI revolution will stratify, not uplift, humanity. The choice is clear: regulate AI with the same urgency with which it is developed, or watch the promise of progress become a privilege of the few.
