Jennifer Zhu Scott: AI’s Risks Come From People, Not Killer Robots
“Self-aware robots with Artificial General Intelligence will destroy humanity.” We have all heard this version of the artificial intelligence (AI) dystopia in science fiction, in movies, and even from respected scientists and philosophers. But I argue that a completely different version of AI dystopia could arrive much sooner: within our lifetime, or our children’s. Unfortunately, most of us are distracted by the binary thinking of machine versus human, bedazzled every time some new technological capacity is achieved, and fail to pay thoughtful attention to this unfolding crisis.
AI technologies, like all the tools we have invented before them, are morally agnostic. It is we as a species, not the “evil” sci-fi robots, who are capable of making our societies worse. Instead of the abrupt appearance of a new class of superintelligent silicon beings declaring war on humans, what we will see is a continuation of the human nature that has created inequality and pain across all societies for thousands of years, now veneered, compounded and amplified by irresponsible and unethical uses of AI and gene editing. This “dystopia” is called Artificial Biological Inequality. It is upon us, and it could be irreversible.
It is a human condition to exaggerate dangers from outside and underestimate the threats within. This is one of the reasons we are so fixated on potential conflicts between machine and human. Our tribal instinct makes the idea of an ultimate, fatal threat coming from a completely unrelated class of creatures far more acceptable than admitting our own capacity to destroy. History has repeatedly shown that the most devastating hostilities toward humans came from other humans, not from nature, other species or, in this case, robots. History has also shown that when a minority can hold on to unfair advantages to advance themselves over others, most will.
It is also a human condition to fall into binary thinking when encountering complex situations, because our brains crave clarity. It feels safe when things are black or white, yes or no. But the “machine versus human” framing is too simplistic for this complex domain. Despite the rapid development of AI (and however intellectually stimulating the topic of AGI may be), today’s technologies are still mainly single-domain optimizers, or ANI — Artificial Narrow Intelligence. ANI means that machine intelligence can perform at a level equal or superior to humans at one specific task. AlphaGo Zero, among the most powerful AI systems in the world, is peerless at the game of Go but useless at almost everything else. Siri and Alexa are learning fast every day, but because of the infinite range of topics users expect them to cover, they are still accurate in only a limited number of verticals.
In a domain of uncertainty, directional thinking is more useful than binary thinking. Directional thinking doesn’t feel safe. It doesn’t attempt to resolve any tension. But it accepts the messy reality as it is, which makes it more practical for understanding where our real threats lie. The actual, and much more imminent, threat to the wellbeing of our society is in fact the misuse of Machine-Human-Symbiosis. To put it in plain English: machine plus human, instead of machine versus human.
Daniel Kahneman, the 2002 Nobel Memorial Prize in Economic Sciences laureate, illustrates in his brilliant 2011 book, “Thinking, Fast and Slow,” that the human brain relies on fast and slow thinking (or System 1 and System 2). “Fast thinking” is ancient, rapid, instinctive and automatic, evolved from animalistic roots millions of years ago. “Slow thinking” is much more recent: deliberate, conscious, analytical and logical. So far, in most cases, machines outperform humans at “slow thinking.” But tasks that humans perform intuitively, without thinking, are difficult for machines to imitate or simulate, and in most cases there is little economic incentive to develop them anyway. The low-hanging fruit for AI is to enhance, complement and accelerate the areas where we are slow and blind, to achieve greater efficiency, accuracy and productivity.
Barring a few rogue, obsessed billionaires who insist on throwing money at AGI R&D, economic incentives mean we will most likely direct resources toward enhancing human performance with machine capabilities, through implants or wearables, rather than toward realizing Artificial General Intelligence. Primitive examples of Machine-Human-Symbiosis already exist. Imagine two children taking the same exam: one relies on the memory and the cognitive and analytical abilities of her natural brain, while the other can access Google on an iPhone. Now imagine that access becomes a pair of smart glasses, or even a chip implanted in her frontal cortex.
Such enhancements would no doubt come at a cost. And where there is a cost, billions of people at the bottom will not be able to access it easily, which turns such enhancements into an unfair advantage for those with the required means. Recently, making such unfair advantages permanent came much closer to reality.
In 2012, European and U.S. scientists developed easily programmed molecules for cutting DNA, unleashing a powerful gene-editing tool called CRISPR-Cas9. Six years later, a rogue Chinese scientist, He Jiankui, shocked the world by announcing that he had used CRISPR to alter the genes of human embryos and produce the first “designer babies”: a pair of HIV-resistant twin girls. In a video, He claimed that because the father was HIV-positive, the procedure was ethical, humane and necessary. Other than He, nobody has met the twins or their parents or knows their identities. The world can judge the morality of this dangerous precedent only at face value. CRISPR is still error-prone and can make unwanted edits that introduce harmful mutations. But the temptation to create stronger, disease-resistant, better-looking and smarter offspring is now tangible.
Many conflicts in human history have deep roots in the tension between the “haves” and the “have-nots.” Inequalities in human societies compound: social, economic and natural biological inequalities intertwine with human greed, vanity, selfishness and insecurity. The world can barely manage the imbalances we have today. Compounded further by unethical or irresponsible AI and genome editing, the tension between haves and have-nots would become inexorable. Artificial biological inequality is the dangerous scenario we all must insist on avoiding.
According to the ancient Greek poet Hesiod, when Prometheus stole fire from heaven, Zeus, the king of the gods, took vengeance by presenting Pandora to Prometheus’ brother Epimetheus. Pandora opened a jar left in his care containing sickness, death and many other unspecified evils, which were then unleashed into the world. Though she hastened to close the container, one thing was left behind: something some translate as Hope, while others prefer the more pessimistic “deceptive expectation.”
Our own Pandora’s box is now half open. It is time to question what an acceptable and responsible technological construct of personhood should be. Before we jump into dramatic discussions of existential risk from hostile superintelligent robots, we need to take a hard look at the real possibility of existential irrelevance for the people at the bottom of every single society. The hope, or “deceptive expectation,” left in Pandora’s jar is wishful thinking, not a strategy. We cannot rely on leaders or regulators to determine our children’s future. It will take every single one of us to speak up and demand ethics in all technological practices. It starts with helping a public that is still largely AI-illiterate understand what is possible and what is not.
Hence I am starting this column with Caixin Global, in the hope of sharing some useful insights, asking some hard questions, debunking binary thinking, and even suggesting solutions when I truly have an answer.
Because I have seen the enemy, and it is us.
Jennifer Zhu Scott is an entrepreneur and was named one of Forbes’ World’s Top 50 Women in Tech in 2018. She serves as a member of the World Economic Forum’s Future of Blockchain Council, and is a China Fellow of the Aspen Institute and an Associate Fellow of the Royal Institute of International Affairs (Chatham House). Twitter: @jenzhuscott