Red Alert – Security Risks in Chinese AI, Says ASPI
The Australian Strategic Policy Institute (ASPI) has called for heightened scrutiny of Chinese AI-enabled products, drawing parallels with the concerns about Chinese 5G equipment.
Governments in the West have long regarded Chinese-made 5G equipment as a security risk, prompting bans and expensive replacement initiatives.
Fears center on potential backdoors for espionage and on Chinese companies' obligation to comply with Beijing's demands, claims that Huawei has consistently denied.
The Risk

In the recently published report "De-risking Authoritarian AI," Simeon Gilding from ASPI raised concerns about AI-enabled products. He claimed that such products pose a greater risk than 5G and are more challenging to mitigate.
While 5G is primarily limited to the telecommunications sector, AI's influence spans all aspects of human life.
The report emphasizes that the focus on China as a potential threat stems from its status as a technology powerhouse with cross-border reach and ambitions.
While regulation of AI is underway, ASPI emphasizes the need to focus on AI-enabled products and services from authoritarian countries, which might otherwise be overlooked in the process.

China's export of advanced and competitively priced AI-enabled technology to democracies raises security concerns. India is highlighted in particular: as it industrializes, Chinese technology is an attractive option for its critical infrastructure needs, and its shared border with nuclear-armed China further increases the risk.
ASPI recommends considering lower regulatory thresholds in India's case, given its sensitive security environment. However, AI-enabled technologies and systems often receive automatic, internet-delivered software updates, frequently without end-users' awareness.
The report warns that comprehensive control becomes nearly impossible given AI's pervasive presence, from influencing online behavior to gatekeeping access to jobs and credit. Managing complex systems such as traffic and maritime operations adds further challenges.
A complete prohibition of all Chinese AI-enabled technology is deemed costly and disruptive. This prompts the exploration of alternative approaches.
Proposed Remedies

ASPI proposes a three-part framework to address the issue: auditing, red teaming, and regulation of AI-enabled products.
Auditing involves assessing how critical a system is to freedom of speech, public health and safety, essential services, democratic processes, and the like, as well as the scale of exposure to the product or service.
If severe vulnerabilities are uncovered and cannot be mitigated, a general ban on the product might be deemed necessary.

Red teaming is another crucial aspect, aiming to identify internal risks in a system. The report cites TikTok as a subject for red teaming: cybersecurity professionals would evaluate potential vulnerabilities and the risk of the platform being exploited for spying.
ASPI's recommendations include prohibiting Chinese AI-enabled technology in certain parts of a network, banning government procurement or use of such products, or even imposing a comprehensive prohibition. Redundancy arrangements and public education efforts are also proposed.
Overall, the report strongly emphasizes the need for proactive monitoring and regulation of AI-enabled products from authoritarian countries. It specifically focuses on China's growing global AI technology capacity and ambitions as a potential area of concern.