Article 564RP Legal Risks of Adversarial Machine Learning Research

by
Fnord666
from SoylentNews on (#564RP)

upstart writes in with an IRC submission:

Legal Risks of Adversarial Machine Learning Research:

Adversarial machine learning (ML), the study of subverting ML systems, is moving at a rapid pace. Researchers have written more than 2,000 papers examining this phenomenon in the last six years. This research has real-world consequences. Researchers have used adversarial ML techniques to identify flaws in Facebook's micro-targeting ad platform, expose vulnerabilities in Tesla's self-driving cars, replicate ML models hosted by Microsoft, Google, and IBM, and evade anti-virus engines.
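To make the evasion attacks mentioned above concrete, here is a minimal, hypothetical sketch of a fast-gradient-sign-style perturbation (in the spirit of FGSM) against a toy logistic-regression classifier. The weights and inputs are made-up illustrative numbers, not drawn from the paper; real attacks target deep models and operational systems, which is exactly where the legal questions arise.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    """Probability the toy linear classifier assigns to the positive class."""
    return sigmoid(w @ x + b)

def fgsm_perturb(w, b, x, y, eps):
    """One fast-gradient-sign step against the logistic loss.

    For logistic regression the gradient of the binary cross-entropy
    loss with respect to the input is (p - y) * w, so the signed step
    can be computed analytically.
    """
    p = predict(w, b, x)
    grad_x = (p - y) * w              # d(loss)/dx
    return x + eps * np.sign(grad_x)  # small step that increases the loss

# Hypothetical classifier and a benign input it labels positive.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])   # w @ x + b = 1.5, so p ≈ 0.82 ("positive")
y = 1.0                    # true label

x_adv = fgsm_perturb(w, b, x, y, eps=0.6)
print(predict(w, b, x), predict(w, b, x_adv))  # prediction flips below 0.5
```

The same idea, applied with query access to a deployed model rather than full knowledge of its weights, underlies the model-replication and anti-virus-evasion results the article cites.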

Studying or testing the security of any operational system potentially runs afoul of the Computer Fraud and Abuse Act (CFAA), the primary federal statute that creates liability for hacking. The broad scope of the CFAA has been heavily criticized, with security researchers among the most vocal. They argue the CFAA, with its rigid requirements and heavy penalties, has a chilling effect on security research. Adversarial ML security research is no different.

In a new paper, Jonathon Penney, Bruce Schneier, Kendra Albert, and I examine the potential legal risks to adversarial machine learning researchers when they attack ML systems, and the implications of the upcoming U.S. Supreme Court case Van Buren v. United States for the adversarial ML field. This work was published at the Law and Machine Learning Workshop held at the 2020 International Conference on Machine Learning (ICML).

Original Submission

Read more of this story at SoylentNews.
