AI Everywhere, All at Once

by Harry Goldstein, from IEEE Spectrum

It's been a frenetic six months since OpenAI introduced its large language model ChatGPT to the world at the end of last year. Every day since then, I've had at least one conversation about the consequences of the global AI experiment we find ourselves conducting. We aren't ready for this, and by "we," I mean everyone: individuals, institutions, governments, and even the corporations deploying the technology today.

The sentiment that we're moving too fast for our own good is reflected in an open letter calling for a pause in AI research, which was posted by the Future of Life Institute and signed by many AI luminaries, including some prominent IEEE members. As News Manager Margo Anderson reports online in The Institute, signatories include Senior Member and IEEE's AI Ethics Maestro Eleanor "Nell" Watson and IEEE Fellow and chief scientist of software engineering at IBM, Grady Booch. He told Anderson, "These models are being unleashed into the wild by corporations who offer no transparency as to their corpus, their architecture, their guardrails, or the policies for handling data from users. My experience and my professional ethics tell me I must take a stand...."

Explore IEEE AI ethics and governance programs

IEEE CAI 2023 Conference on Artificial Intelligence, June 5-6, Santa Clara, Calif.

AI GET Program for AI Ethics and Governance Standards

IEEE P2863 Organizational Governance of Artificial Intelligence Working Group

IEEE Awareness Module on AI Ethics

IEEE CertifAIEd

Recent Advances in the Assessment and Certification of AI Ethics

But research and deployment haven't paused, and AI is becoming essential across a range of domains. For instance, Google has applied deep reinforcement learning to optimize the placement of logic and memory on chips, as Senior Editor Samuel K. Moore reports in the June issue's lead news story, "Ending an Ugly Chapter in Chip Design." Deep in the June feature well, the cofounders of KoBold Metals explain how they use machine-learning models to search for minerals needed for electric-vehicle batteries in "This AI Hunts for Hidden Hoards of Battery Minerals."

Somewhere between the proposed pause and headlong adoption of AI lie the social, economic, and political challenges of creating the regulations that tech CEOs like OpenAI's Sam Altman and Google's Sundar Pichai have asked governments to create.

To help make sense of the current AI moment, I talked with IEEE Spectrum senior editor Eliza Strickland, who recently won a Jesse H. Neal Award for best range of work by an author for her biomedical, geoengineering, and AI coverage. Trustworthiness, we agreed, is probably the most pressing near-term concern. Addressing the provenance of information and its traceability is key. Otherwise, people may be swamped by so much bad information that the fragile consensus among humans about what is and isn't real breaks down entirely.

The European Union is ahead of the rest of the world with its proposed Artificial Intelligence Act. It assigns AI applications to three risk categories: Those that create unacceptable risk would be banned, high-risk applications would be tightly regulated, and applications deemed to pose few if any risks would be left unregulated.

The EU's draft AI Act touches on traceability and deepfakes, but it doesn't specifically address generative AI: deep-learning models that can produce high-quality text, images, or other content based on their training data. However, a recent article in The New Yorker by the computer scientist Jaron Lanier directly takes on provenance and traceability in generative AI systems.

Lanier views generative AI as a social collaboration that mashes up work done by humans. He has helped develop a concept dubbed "data dignity," which loosely translates to labeling these systems' products as machine generated based on data sources that can be traced back to humans, who should be credited with their contributions. "In some versions of the idea," Lanier writes, "people could get paid for what they create, even when it is filtered and recombined through big models, and tech hubs would earn fees for facilitating things that people want to do."

That's an idea worth exploring right now. Unfortunately, we can't prompt ChatGPT to spit out a global regulatory regime to guide how we should integrate AI into our lives. Regulations ultimately apply to the humans currently in charge, and only we can ensure a safe and prosperous future for people and our machines.
