In Depth: Pushback Against China Tech Giants Grows With Accusation of Algorithmic ‘Bullying’
A government-backed Chinese consumer group has accused the nation’s tech giants of using their data-based algorithms to “bully” consumers and put them at a disadvantage.
At a symposium on the topic held by the China Consumers Association (CCA) on Jan. 7, the group released a three-section, 14-point document outlining the ways data-driven algorithms impinge on the rights of consumers in their interactions with large tech platforms, and calling for beefed-up powers for regulators.
It’s the latest sign that the technology broadly described as “artificial intelligence” — and specifically algorithm-based advertising and sales — is shaping up as a new front in the country’s push to control big tech.
And it comes amid a broader national conversation about how tech giants use their technology to control the information available to individual consumers, and leverage their access to personal data for profit.
Among the grievances listed by the CCA are complex sales promotions that obscure the true costs of a product, targeted search results that create information asymmetry, and the practice of hiding negative reviews, which it says leaves consumers “squeezed by algorithms and the targets of technological bullying.”
Of particular concern to the group is the practice of “algorithmic price discrimination,” where the personal data of an online shopper is used to calculate different prices for different individuals based on what they might be willing to pay.
The CCA proposed a series of remedies, including establishing a special organization to police algorithmic ethics and investigate “unfair algorithms,” and equipping government departments with the ability to regulate them.
It also controversially called for tech giants to be forced to turn over what has been described as their “secret sauce” – the proprietary algorithms which underpin their businesses – to regulators in the case of disputes.
The document released by the government-backed group is seen as the latest evidence China is preparing to publish a draft regulation on artificial intelligence and algorithms.
The CCA called on “all sectors of society to work together for … the fair and reasonable application of algorithms, and prevent operators from using algorithms to do evil.”
The document from the semi-official group is not legally binding. The CCA had a hand in the landmark 1993 consumer protection law, but is these days better known for releasing reports on defective products and services around China’s annual consumer rights day.
But Clement Chan, a legal professor at the University of Hong Kong who specializes in Chinese data law, said the content of the document is “meticulous” and may be “referred to by judges, and may have some persuasive basis.”
“It might also be a hint about what regulators are considering. Or just let the consumer rights association test the waters and see how the general public and in particular the (tech) industry responds,” he said.
Chan further said it was not clear how a provision in the guidelines that such algorithms should promote “socialist core values” might play out in practice.
A number of legal observers told Caixin they expect one of the Chinese bodies with jurisdiction over tech firms — such as the Cyberspace Administration of China or the Ministry of Industry and Information Technology — to release a draft regulation on algorithms and artificial intelligence later this year.
Chan said the push should be seen in the context of a widespread crackdown on how private firms use personal data in China. But the CCA document could also suggest regulators may be examining the use of consumer protection law to further control big tech.
“It’s like how in the U.S., competition (law) is a very useful legal weapon for regulating internet giants,” Chan said.
Consumers in the dark
Also intriguing, said Nicolas Bahmanyar, a data privacy consultant at Leaf Law Firm, is that if the recommendations are incorporated into a regulation, it would place the burden of proof on tech companies to demonstrate their algorithms are not targeting people unjustly.
“It’s a very interesting message … the consumer association is saying we understand the (ill) effects, but we don’t understand the causes, and we want companies to explain the causes to us.”
“Usually if you claim something bad has happened, it’s on you to prove it. The logic of the consumer association is that it should be on the tech company to explain. It’s really shining a light on how complex this technology is, and how ill-informed the user base is,” he said.
Bahmanyar said he expects a draft regulation “on AI security and ethics” to be released in China by the end of the year. “Because everything is in place — we have the white paper from 2019, we have guidelines from the TC260, we now have the push from consumers. The next logical step is for the regulator to issue a draft.”
The TC260 is a government committee which sets national standards on cybersecurity and data protection.
China’s draft Personal Information Protection Law, which is currently working its way through the legal process, also has stipulations on the impact of algorithms on individual rights, with the current draft stating that “the use of personal information for automated decision-making shall be done transparently, fairly and reasonably.”
With new restrictions threatening to eat into the underlying business model of the entire internet industry — which is heavily reliant on algorithmic recommendations and algorithm-driven advertising to support free services — it remains to be seen whether there will be a backlash from an industry already hobbled by antitrust probes and a newfound regulatory interest in data protection.
“Users must protect their privacy, and companies must make money,” said Fang Yu, director of the Internet Law Research Center of the China Academy of Information and Communications Technology. “Privacy protection has a cost, and who will pay that cost?” It was important to find a balance, he said.
An internet advertising industry insider, who asked not to be identified so they could speak freely, said data use by Chinese tech companies had become laissez-faire and chaotic, and would not be easy to rein in. “A one-size-fits-all solution will not work,” they said.
“At the moment I don’t think there’s a clear threshold on what’s healthy and what’s not — anything goes,” said Bahmanyar. “What’s needed is to define what’s healthy, what’s acceptable, and then enforcement. Everyone agrees that we should put signs up, but nobody has worked out what they should say.”
Due to reporter error, a previous version of the story gave the incorrect year for the release of an AI white paper.
Contact reporter Flynn Murphy (email@example.com) and editor Michael Bellart (firstname.lastname@example.org)