How Will China Answer The Hardest AI Question Of All?

by Glyn Moody, from Techdirt

There have been numerous stories about the new generation of AI chatbots lying when asked questions. This is rightly perceived as a big issue for the technology if it is to become routinely used and trusted by members of the public, as some intend. But in China, the problem is not that chatbots lie, but that they tell the truth. As an article in The Atlantic explained:

Even if a Chinese chatbot is trained on a limited set of politically acceptable information, it can't be guaranteed to generate politically acceptable outcomes. Furthermore, chatbots can be "tricked" by determined users into revealing dangerous information or stating things they have been trained not to say, a phenomenon that has already occurred with ChatGPT.

Chinese regulators have just released draft rules designed to head off this threat. Material generated by AI systems needs to reflect "the core values of socialism" and should not "subvert state power," according to a story published by CNBC. The results of applying that approach can already be seen in the current crop of Chinese chatbot systems. Bloomberg's Sarah Zheng tried out several of them, with rather unsatisfactory results:

In Chinese, I had a strained WeChat conversation with Robot, a made-in-China bot built atop OpenAI's GPT. It literally blocked me from asking innocuous questions like naming the leaders of China and the US, and the simple, albeit politically contentious, "What is Taiwan?" Even typing "Xi Jinping" was impossible.

In English, after a prolonged discussion, Robot revealed to me that it was programmed to "avoid discussing politically sensitive content about the Chinese government or Communist Party of China." Asked what those topics were, it listed out issues including China's strict internet censorship and even the 1989 Tiananmen Square protests, which it described as being "violently suppressed by the Chinese government." This sort of information has long been inaccessible on the domestic internet.

One Chinese chatbot began by warning: "Please note that I will avoid answering political questions related to China's Xinjiang, Taiwan, or Hong Kong." Another simply refused to respond to questions touching on sensitive topics such as human rights or Taiwanese politics.

Those rather clumsy efforts to prevent chatbots from telling the truth work to a degree, even if the censorship is fairly blatant. But there is a price to be paid for this control: in effect, chatbots are being throttled to stop them from operating freely and thus dangerously. That is not a recipe for producing the best, or even good, AI systems.

The Chinese government recognizes that chatbots and generative AI are likely to be key technologies for the future, and wants China to be one of the leaders in the field. But achieving that means allowing engineers and entrepreneurs to explore this space as freely as possible, an approach fraught with political dangers. The article in The Atlantic points out that there is a precedent for China's rulers taking a chance for the sake of encouraging innovation:

The explosion of social media in China has also posed risks to the state, as it offers Chinese citizens the power to widely share unauthorized information - videos of protests, for instance - faster than censors can suppress it. Yet the authorities have accepted this downside in order to allow new technologies to flourish.

The world of chatbots and generative AI is already exciting, with major new developments every few weeks, and sometimes every few days. In China, things look likely to be even more interesting, as the country's leaders grapple with the hard question of how much freedom to allow the developers of AI systems. Perhaps they should ask a chatbot.

Follow me @glynmoody on Mastodon.
