Caixin

Weekend Long Read: Why, When, and How to Regulate AI

Published: Apr. 19, 2025, 9:00 a.m. GMT+8
Debate should move away from abstract consideration of what rules might constrain or contain AI behavior, and get into the more practical challenges of lawmaking. Photo: AI generated

The better part of a century ago, science fiction author Isaac Asimov imagined a future in which robots had become an integral part of daily life. At the time, he later recalled, most robot stories fell into one of two genres. The first was robots-as-menace: technological innovations that rise up against their creators in the tradition of “Frankenstein,” with echoes reaching at least as far back as the Greek myth of Prometheus, which gave Mary Shelley’s 1818 novel its subtitle, “The Modern Prometheus.” Less common was a second group of tales that treated robots-as-pathos: lovable creations kept as slaves by their cruel human masters. These were morality tales about the danger posed not by humanity’s creations but by humanity itself.

DIGEST HUB
Explore the story in 30 seconds
  • Isaac Asimov introduced the idea of morally neutral, engineer-designed robots governed by the Three Laws of Robotics, sparking debates on AI ethics and regulation.
  • Efforts to regulate AI center on ethical principles of human control, transparency, safety, accountability, non-discrimination, and privacy, but how, and even whether, to implement them remains debated.
  • Approaches to AI regulation seek to balance innovation, societal values, and governance, confronting challenges such as the timing of rules, risk management, and limits on AI’s use in public and moral domains.
Explore the story in 3 minutes

The article explores the ethical and regulatory challenges posed by artificial intelligence (AI), surveying past, present, and future efforts to manage its development and applications. Isaac Asimov’s 20th-century creation, the “Three Laws of Robotics,” helped frame many of these debates. His fictional framework imagined robots as morally neutral tools programmed to prioritize human safety, obedience, and self-preservation, in that order [para. 1][para. 2]. Though culturally influential, the laws were never enforceable or practical as real-world regulations. Instead, they continue to inspire discussion about the constraints and expectations placed on AI behavior, underscoring the complexity of embedding ethical principles in technology [para. 3][para. 4].

Attempts to codify AI ethics began in earnest in the early 2000s, producing international efforts such as the European Robotics Research Network’s Roboethics Roadmap (2006) and a recurring set of principles: human control, transparency, safety, accountability, non-discrimination, and privacy [para. 5][para. 6]. Industry initiatives from companies like Microsoft, IBM, and Google emphasize responsible development practices, while governments worldwide have issued frameworks to encourage ethical AI. Key examples include the European Union’s Ethics Guidelines for Trustworthy AI (2019) and New Zealand’s Algorithm Charter (2020) [para. 6][para. 7]. However, translating these guidelines into practice has been slow, and regulators across jurisdictions differ widely in their approaches [para. 8].

The text examines contrasting global regulatory trends. The EU takes a cautious, human-rights-focused approach through mechanisms like the General Data Protection Regulation (GDPR) and the AI Act, which classifies AI systems into risk tiers and bans some harmful applications, such as real-time biometric surveillance. The U.S., by contrast, leans on market-based approaches, preferring targeted interventions to any overarching federal AI law. China, meanwhile, blends AI-driven innovation with state control, reflecting both its sovereign and market ambitions [para. 9][para. 10].

The "Collingridge Dilemma" sheds light on regulatory timing challenges. Early interventions may misjudge technological risks, while later actions are often costly and less effective. Risk management strategies like regulatory sandboxes (initially popular in fintech) can help test AI technologies safely. However, the inadequacy of pre-emptive measures, such as banning general AI research, has highlighted the difficulty of regulating rapidly advancing fields [para. 11][para. 12].

One area requiring stricter regulation involves inherently governmental decisions, such as algorithms used in public services. Missteps with AI in benefit fraud detection and educational grading reveal the necessity of maintaining human accountability and transparency in the decision-making process. Documents such as New Zealand's Algorithm Charter offer frameworks for ensuring that humans oversee and remain responsible for decisions made in consultation with algorithms [para. 13]. Similar ethical dilemmas arise in defining red lines for AI's use in military applications or interactions with humanoid robots [para. 13][para. 14].

The article acknowledges that AI regulation involves balancing competing demands: encouraging innovation while containing risk. Red-line rules, such as prohibitions on autonomous decision-making in warfare, must be supplemented by broader measures that ensure accountability and explain decisions to the public. On the other hand, refraining from premature or excessive regulation leaves more room for adaptive policymaking, though it may also push critical legal and political questions onto the courts [para. 14][para. 15].

Ultimately, the article argues that existing ethical frameworks must evolve into enforceable laws tailored to specific contexts. Public and private sectors should collaborate to manage risks, prohibit harmful practices, and establish accountability mechanisms. It concludes that regulatory efforts must move beyond theoretical principles toward jurisdiction-specific laws that reflect practical realities. As AI becomes more integrated into society, regulatory frameworks will need to grow more diverse and sophisticated than the “Three Laws” that first popularized ethical thinking about machines [para. 16][para. 17][para. 18].

Who’s Who
Microsoft
According to the article, Microsoft has developed its own Responsible AI Principles, which were published in the first half of 2018. These principles are part of broader efforts by companies and organizations to establish guidelines and frameworks for the ethical development and deployment of AI technologies.
IBM
The article mentions that IBM published its “Principles for Trust and Transparency” in the first half of 2018. This document is part of broader efforts by companies to establish guidelines and frameworks focusing on responsible AI development and ethical practices.
Google
The article notes that Google published its "AI Principles" in the first half of 2018, part of a broader trend of individual companies drafting guidelines for ethical AI use. These principles emphasize responsible innovation in AI development.
OpenAI
The article mentions OpenAI as the developer of the GPT (Generative Pre-trained Transformer) models, referencing GPT-4 specifically. It highlights a March 2023 open letter from the Future of Life Institute, signed by figures including Elon Musk, calling for a six-month pause on training AI systems more powerful than GPT-4 and citing concerns about AI regulation, potential risks, and ethical limits.