Whose voice is it anyway? A growing debate over AI-generated content
Who has stolen your voice?
This was the main headline of an investigative story published by a leading news agency in China on February 12. The subtitle: A look into the abuse of AI-generated voices.
Xinhua news agency's latest probe into the issue comes at a time when China has stepped up its fight against infringement of personality rights involving AI-generated content, especially the unlawful reproduction of people's voices on many Internet platforms.
"What you hear is not necessarily true," said the Xinhua report. "Internet platforms are permeated with AI-generated (human) voices, and the abuse of such voices has become quite controversial at times."
A college student told Xinhua that he had enjoyed listening to a famous singer's songs for some time, but it had never occurred to him until recently that all those songs had been "sung" by music apps with the aid of artificial intelligence voice synthesis technology.
Xinhua quoted Niu Shaodong, an engineer with the Chinese Academy of Sciences, as saying that one can hardly tell AI-generated voices from real human voices nowadays, whereas only a couple of years ago many synthesized voices were much easier to detect.
As Xinhua discovered, the emergence of myriad sound synthesis apps has made it possible now to "clone" a person's voice in just a few seconds, and such "cloned" voices have permeated music and news channels on certain video platforms, generating a considerable amount of "likes" from users. The Xinhua story didn't say whether these users know they've been "fooled" by AI-generated voices.
It's not just about "stealing" your voice with AI technology. In certain cases, Xinhua noted, vulgar words were found to have been uttered by these AI-generated voices.
If unchecked, such misuse of AI-generated voices would hardly bode well for individual or collective ethos. The good news is that Chinese courts have meted out due punishment for the malpractice.

On January 17, the Supreme People's Court and China Central Television (CCTV) jointly organized a nationwide publicity campaign to promote public awareness of major legal cases decided in the previous year. The public was invited to vote on 49 candidate cases, the top 10 of which would eventually be selected for the year 2024.
On February 17, I checked the 49 cases published on a CCTV video platform (www.v.cctv.com) and found that the fifth was associated with AI infringement of the personality rights of a voice actress.
It was the first such case decided in China, which made it clear that one's voice, covered by one's personality rights, is entitled to legal protection even if the voice has been adapted with the aid of AI technology.
It so happened that a certain Ms Yin, a professional voice actress, found in early 2024 that her voice – treated with AI technology – had been used in certain films and video games. She discovered later that a certain cultural communication company, with which she had previously cooperated in producing some audio books using her voice, had sold those audio products to a software firm without her permission. The software firm was found to have made and sold some new products based on her AI-adapted voice.
Despite the defendants' arguments, the Beijing Internet Court ruled in favor of the plaintiff last April, stating that an AI-adapted human voice, so long as it is recognizable as a particular person's voice, is also protected as part of that person's personality rights.
In recommending the case as a candidate for one of the most influential court rulings in 2024, the organizers of the publicity campaign explained that it demonstrates a positive judicial attitude toward protecting personality rights while promoting tech for good.
Tech must be for good, indeed. Otherwise, the more developed technology is, the more easily it may be used to encroach upon people's legal rights.
Despite the landmark court decision last April to extend personality rights protection to certain AI-adapted voices, more collective efforts are still needed to nip AI infringement of one's voice rights in the bud. In particular, Internet platforms should do more to prevent their users from uploading or spreading audiovisual products made with AI-generated voices that apparently violate people's personality rights.
For instance, Zhang Wenhong, a well-known Shanghai doctor and expert on infectious diseases, found at the end of last year that his image and voice had been illegally adapted in a commercial ad, making it look as if he were really hawking a kind of protein bar. Many elderly people who watched the fraudulent ad believed it was Dr Zhang himself praising and promoting the protein bars, so they placed orders without hesitation.
Dr Zhang said he had complained to the relevant platforms, but in the end it was hard to hunt down the perpetrators, who were quick to change their accounts.
This is where enhanced public awareness comes in. The more publicity campaigns courts and the media run to alert the public to AI infringement of rights, the better.
At the same time, Internet platforms should improve their ability to screen out dubious content that borders on rights violations before – not after – it causes substantial damage to people's rights and interests. After all, platforms are not always neutral venues bearing no responsibility for what their users do.
