AI Now's annual report: stop doing "emotion detection"; stop "socially sensitive" facial recognition; make AI research diverse and representative -- and more
Every year, the AI Now Institute (previously) publishes a deep, thoughtful, and important overview of the state of AI research and the ethical gaps in its deployment, along with a dozen urgent recommendations for industry, the research community, regulators, and governments.
This year's report is especially important, as algorithmic discrimination, junk science, bad labor practices, and inappropriate deployments have gained salience and urgency.
The Institute's top recommendations are:
* Stop deploying "emotion detection" systems ("governments should specifically prohibit use of affect recognition in high-stakes decision-making processes"). These are junk science ("built on markedly shaky foundations"), but they're being used for everything from medical care to insurance to student performance evaluation.
* Stop using facial recognition in "sensitive social and political contexts" ("including surveillance, policing, education, and employment - where facial recognition poses risks and consequences that cannot be remedied retroactively").
* Fix the industry's diversity problem "to address systemic racism, misogyny, and lack of diversity."
* Expand bias research beyond technical fixes: "center 'non-technical' disciplines whose work traditionally examines such issues, including science and technology studies, critical race studies, disability studies, and other disciplines keenly attuned to social context" (see: "second-wave algorithmic accountability").
* Mandate disclosure of the AI industry's climate impact: "Disclosure should include notifications that allow developers and researchers to understand the specific climate cost of their use of AI infrastructure."
* Give workers the right to "contest exploitative and invasive AI" with the help of trade unions: "Workers deserve the right to contest such determinations [by "AI-enabled labor-management systems"], and to collectively agree on workplace standards that are safe, fair, and predictable."
* Give tech workers the right to know what they're working on and to "contest unethical or harmful uses of their work": "Companies should ensure that workers are able to track where their work is being applied, by whom, and to what end."
* Expand biometric privacy rules for governments and private actors: A call to universalize Illinois's world-beating Biometric Information Privacy Act (BIPA).
* Regulate "the integration of public and private surveillance infrastructures": from smart cities to neighborhood surveillance (looking at you, Ring), to military and government surveillance contracts with Big Tech, to Belt-and-Road and other digital-imperialism projects that build remote-controlled surveillance tech into client states' governance. "We need strong transparency, accountability, and oversight in these areas, such as recent efforts to mandate public disclosure and debate of public-private tech partnerships, contracts, and acquisitions."
* Use Algorithmic Impact Assessments to "account for impact on climate, health, and geographical displacement."
* Researchers should "better document the origins of their models and data" to "account for potential risks and harms": "Advances in understanding of bias, fairness, and justice in machine learning research make it clear that assessments of risks and harms are imperative."
* No health-related AI research without informed consent: "AI health systems need better informed-consent approaches and more research to understand their implications in light of systemic health inequities."
2019 Report [AI Now Institute]