While police use of artificial intelligence is achieving impressive results in analyzing data and solving cases, the frontier technology is also being used to commit new types of crime, presenting fresh challenges for police officers.
In October, the Kunshan public security bureau in Jiangsu province said its AI team had played a crucial role in the detection and prevention of crimes, as the cutting-edge technology is able to promptly recognize suspicious activities and issue warnings.
The bureau highlighted the remarkable role of AI in combating telecom fraud, revealing one case earlier this year in which its AI team helped a local resident who had been cheated out of 980,000 yuan ($135,300).
The victim's statement was transmitted to the AI team's analysis center. Within 10 minutes, they had traced the flow of funds, successfully halting the transfer of 500,000 yuan and leading to the capture of nine suspects.
In Kunming, Yunnan province, however, police officers recently prevented an attempted scam in which the suspect used AI face-swapping software to pretend he was a close friend of a local woman surnamed Wang. The suspect had planned to lure her to Guangdong province to deliver gold bars worth more than 300,000 yuan under the pretext of an emergency, according to a Guangming Daily report.
"The cases tell us that the safety of AI is as important as its development and application, which needs greater attention from all walks of life and stronger oversight to prevent the misuse and abuse of the technology," said Zheng Ning, head of the Law Department at the Communication University of China's Cultural Industries Management School.
She praised the wide application of AI to save time in searching and analyzing information, but also emphasized the importance of proper supervision of the technology.
Public concerns about security, privacy and authenticity related to AI have been growing rapidly, "making many countries, including China, follow its development closely and draw the boundaries of what can be done and what must not be done," she added.
New risks
In October, a series of videos that used AI to imitate the voice of Lei Jun, founder and CEO of Chinese tech giant Xiaomi, went viral online, with the fake Lei seen commenting on hot social issues.
The real Lei said he was troubled by the videos, adding, "I don't think using AI in this way is a good thing."
It was not the first time that someone has felt aggrieved after his or her voice was imitated by AI without permission.
In April, the Beijing Internet Court heard a lawsuit in which a voice-over artist surnamed Yin claimed that her voice had been used without her consent in audiobooks circulating online. The voice was processed by AI, the lawsuit said.
Yin sued five companies including a cultural media corporation that had provided recordings of her voice for unauthorized use, an AI software developer, and a voice-dubbing app operator.
After an investigation, the court found that the cultural media company sent Yin's recordings to the software developer without her permission. The software company then used AI to mimic Yin's voice and offered the AI-generated products for sale.
Zhao Ruigang, vice-president of the court and also the presiding judge in the case, said that the AI-powered voice mimicked Yin's vocal characteristics, intonation and pronunciation style to a high degree, adding "this level of similarity allowed for the identification of Yin's voice."
Citing the Civil Code, he ruled that the conduct of the cultural media enterprise and the AI software developer constituted an infringement of Yin's rights to her voice.
The two defendants were ordered to pay her 250,000 yuan in compensation. The other companies, however, were not held liable for the infringement as they unknowingly used the AI-generated voice products, he said.
After announcing the verdict, Zhao said that the growing use of AI technology across various fields had raised new risks regarding personal rights, and called for tightening supervision of the technological service providers and platforms under specific provisions in current laws.
Some fraudsters have also spread AI-generated rumors about disasters and diseases, disturbing order in cyberspace and causing public panic.
The number of economic and enterprise rumors generated by AI increased by 99 percent in the past year, according to a report released by a Tsinghua University research center in April.
Regulatory controls
The large amount of polluted information on AI platforms, and the abuse of AI-generated content, have made more people realize that what they see is not necessarily real, prompting them to demand stronger oversight and more comprehensive management of the technology.
In August 2023, China issued an interim regulation to manage AI-generated services and products, aiming to safeguard national security and protect people's legitimate rights and interests, while promoting development of the technology.
The regulation, which was jointly formulated by seven authorities, including the Cyberspace Administration of China, the Ministry of Public Security and the Ministry of Science and Technology, highlights the protection of personal data and intellectual property.
It requires AI-generated service providers to improve the accuracy and reliability of generated information and to label such content. The interim regulation also requires regular security assessments as well as measures to prevent juveniles from becoming addicted to AI services.
In August this year, the European Union gave final approval to an AI law that takes a "risk-based approach" to products and services that use AI, stating that the riskier an AI application is, the more scrutiny it faces.
The law stipulates that AI-generated deepfake pictures, video and audio of existing people, places, and events must be labeled as artificially manipulated.
Earlier, Brussels also suggested broader AI rules, while some US states are working on their own AI legislation.
Similar legislative guardrails are being considered in countries around the world, as well as by global groups such as the United Nations and the Group of Seven industrialized nations.
Striking a balance
Compared with the rules and guidelines made by other nations and organizations, China's AI management is more like a toolkit, said Zhu Wei, deputy head of the Communication Law Research Center at the China University of Political Science and Law.
"Instead of putting all AI-related content into an individual law, our AI management can be seen in many laws and rules, such as the Civil Code and the interim regulation," he said.
"We're building a legal system to develop, manage and supervise AI."
Zheng, from the Communication University of China, said that China's management of AI is more flexible, which not only clarifies the boundaries for operators and users, but also leaves them more space for innovation and development.
"The bottom line is to neither damage national security and data security, nor bring damage to others," she said.
The development and application of AI technology cannot violate the National Security Law, the Data Security Law, the Cybersecurity Law and the Law on Personal Information Protection, according to Zheng.
"Under such a legal framework, it is crucial to refine rules for AI development in some major areas, such as finance, education, transportation and medical care to meet more needs of the people and industries," she said.
"It's not easy to achieve a balance between the security and development of the emerging technology, so it's urgent and necessary for all walks of life, including government agencies, judicial authorities, internet platforms and the public, to jointly participate in its management," she added.
Wang Sixin, a professor of internet law at the Communication University of China, said that seeking this balance is not achievable overnight. It requires long-term effort and refinement, as there are many uncertainties in technological development.
"Technological developments allow us to discover new problems and also urge us to find ways to solve them," he said.
"The management and supervision of AI and its generative content is always on the move. It's a process that constantly needs to be improved," he said. "Therefore, relevant rules or provisions should be made flexibly and not be too detailed."
Wang compared AI to a knife that can cut both vegetables and meat. "The key is what the person using the knife wants to do," he said, adding this is why management and oversight in China focus more on how to enhance the legal and security awareness of AI developers, operators and users.